00:00:00.001 Started by upstream project "autotest-per-patch" build number 132521 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.024 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.025 The recommended git tool is: git 00:00:00.025 using credential 00000000-0000-0000-0000-000000000002 00:00:00.028 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.042 Fetching changes from the remote Git repository 00:00:00.045 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.083 Using shallow fetch with depth 1 00:00:00.083 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.083 > git --version # timeout=10 00:00:00.152 > git --version # 'git version 2.39.2' 00:00:00.152 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.210 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.210 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.713 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.724 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.737 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:02.737 > git config core.sparsecheckout # timeout=10 00:00:02.749 > git read-tree -mu HEAD # timeout=10 00:00:02.766 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:02.794 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:02.794 > git rev-list 
--no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:02.902 [Pipeline] Start of Pipeline 00:00:02.916 [Pipeline] library 00:00:02.918 Loading library shm_lib@master 00:00:02.918 Library shm_lib@master is cached. Copying from home. 00:00:02.933 [Pipeline] node 00:00:02.944 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest_3 00:00:02.945 [Pipeline] { 00:00:02.956 [Pipeline] catchError 00:00:02.958 [Pipeline] { 00:00:02.971 [Pipeline] wrap 00:00:02.980 [Pipeline] { 00:00:02.989 [Pipeline] stage 00:00:02.991 [Pipeline] { (Prologue) 00:00:03.010 [Pipeline] echo 00:00:03.012 Node: VM-host-WFP7 00:00:03.018 [Pipeline] cleanWs 00:00:03.028 [WS-CLEANUP] Deleting project workspace... 00:00:03.028 [WS-CLEANUP] Deferred wipeout is used... 00:00:03.035 [WS-CLEANUP] done 00:00:03.237 [Pipeline] setCustomBuildProperty 00:00:03.328 [Pipeline] httpRequest 00:00:04.087 [Pipeline] echo 00:00:04.089 Sorcerer 10.211.164.20 is alive 00:00:04.098 [Pipeline] retry 00:00:04.100 [Pipeline] { 00:00:04.113 [Pipeline] httpRequest 00:00:04.117 HttpMethod: GET 00:00:04.118 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:04.119 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:04.138 Response Code: HTTP/1.1 200 OK 00:00:04.139 Success: Status code 200 is in the accepted range: 200,404 00:00:04.139 Saving response body to /var/jenkins/workspace/raid-vg-autotest_3/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:26.945 [Pipeline] } 00:00:26.968 [Pipeline] // retry 00:00:26.977 [Pipeline] sh 00:00:27.264 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:27.283 [Pipeline] httpRequest 00:00:27.725 [Pipeline] echo 00:00:27.727 Sorcerer 10.211.164.20 is alive 00:00:27.737 [Pipeline] retry 00:00:27.740 [Pipeline] { 00:00:27.755 [Pipeline] httpRequest 00:00:27.761 HttpMethod: GET 00:00:27.761 URL: 
http://10.211.164.20/packages/spdk_8afd1c921c6aa1340e442a866f4aeb155cdec456.tar.gz 00:00:27.762 Sending request to url: http://10.211.164.20/packages/spdk_8afd1c921c6aa1340e442a866f4aeb155cdec456.tar.gz 00:00:27.767 Response Code: HTTP/1.1 200 OK 00:00:27.767 Success: Status code 200 is in the accepted range: 200,404 00:00:27.768 Saving response body to /var/jenkins/workspace/raid-vg-autotest_3/spdk_8afd1c921c6aa1340e442a866f4aeb155cdec456.tar.gz 00:02:52.732 [Pipeline] } 00:02:52.749 [Pipeline] // retry 00:02:52.757 [Pipeline] sh 00:02:53.042 + tar --no-same-owner -xf spdk_8afd1c921c6aa1340e442a866f4aeb155cdec456.tar.gz 00:02:55.600 [Pipeline] sh 00:02:55.886 + git -C spdk log --oneline -n5 00:02:55.886 8afd1c921 blob: don't free bs when spdk_bs_destroy/spdk_bs_unload fails 00:02:55.886 9c7e54d62 blob: don't use bs_load_ctx_fail in bs_write_used_* functions 00:02:55.886 9ebbe7008 blob: fix possible memory leak in bs loading 00:02:55.886 ff2e6bfe4 lib/lvol: cluster size must be a multiple of bs_dev->blocklen 00:02:55.886 9885e1d29 lib/blob: cluster_sz must be a multiple of PAGE 00:02:55.906 [Pipeline] writeFile 00:02:55.922 [Pipeline] sh 00:02:56.209 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:02:56.222 [Pipeline] sh 00:02:56.505 + cat autorun-spdk.conf 00:02:56.505 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:56.505 SPDK_RUN_ASAN=1 00:02:56.505 SPDK_RUN_UBSAN=1 00:02:56.505 SPDK_TEST_RAID=1 00:02:56.505 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:56.513 RUN_NIGHTLY=0 00:02:56.514 [Pipeline] } 00:02:56.526 [Pipeline] // stage 00:02:56.539 [Pipeline] stage 00:02:56.541 [Pipeline] { (Run VM) 00:02:56.553 [Pipeline] sh 00:02:56.837 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:02:56.837 + echo 'Start stage prepare_nvme.sh' 00:02:56.837 Start stage prepare_nvme.sh 00:02:56.837 + [[ -n 1 ]] 00:02:56.837 + disk_prefix=ex1 00:02:56.837 + [[ -n /var/jenkins/workspace/raid-vg-autotest_3 ]] 00:02:56.837 + [[ -e 
/var/jenkins/workspace/raid-vg-autotest_3/autorun-spdk.conf ]] 00:02:56.837 + source /var/jenkins/workspace/raid-vg-autotest_3/autorun-spdk.conf 00:02:56.837 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:56.837 ++ SPDK_RUN_ASAN=1 00:02:56.837 ++ SPDK_RUN_UBSAN=1 00:02:56.837 ++ SPDK_TEST_RAID=1 00:02:56.837 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:56.837 ++ RUN_NIGHTLY=0 00:02:56.837 + cd /var/jenkins/workspace/raid-vg-autotest_3 00:02:56.837 + nvme_files=() 00:02:56.837 + declare -A nvme_files 00:02:56.837 + backend_dir=/var/lib/libvirt/images/backends 00:02:56.837 + nvme_files['nvme.img']=5G 00:02:56.837 + nvme_files['nvme-cmb.img']=5G 00:02:56.837 + nvme_files['nvme-multi0.img']=4G 00:02:56.837 + nvme_files['nvme-multi1.img']=4G 00:02:56.837 + nvme_files['nvme-multi2.img']=4G 00:02:56.837 + nvme_files['nvme-openstack.img']=8G 00:02:56.837 + nvme_files['nvme-zns.img']=5G 00:02:56.837 + (( SPDK_TEST_NVME_PMR == 1 )) 00:02:56.837 + (( SPDK_TEST_FTL == 1 )) 00:02:56.837 + (( SPDK_TEST_NVME_FDP == 1 )) 00:02:56.837 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:02:56.837 + for nvme in "${!nvme_files[@]}" 00:02:56.837 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G 00:02:56.837 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:02:56.837 + for nvme in "${!nvme_files[@]}" 00:02:56.837 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G 00:02:56.837 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:02:56.837 + for nvme in "${!nvme_files[@]}" 00:02:56.837 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G 00:02:56.837 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:02:56.837 + for nvme in "${!nvme_files[@]}" 00:02:56.837 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G 00:02:56.837 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:02:56.837 + for nvme in "${!nvme_files[@]}" 00:02:56.837 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G 00:02:56.837 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:02:56.837 + for nvme in "${!nvme_files[@]}" 00:02:56.837 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G 00:02:56.837 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:02:56.837 + for nvme in "${!nvme_files[@]}" 00:02:56.837 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G 00:02:57.097 
Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:02:57.097 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu 00:02:57.097 + echo 'End stage prepare_nvme.sh' 00:02:57.097 End stage prepare_nvme.sh 00:02:57.111 [Pipeline] sh 00:02:57.475 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:02:57.475 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -H -a -v -f fedora39 00:02:57.475 00:02:57.475 DIR=/var/jenkins/workspace/raid-vg-autotest_3/spdk/scripts/vagrant 00:02:57.475 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest_3/spdk 00:02:57.475 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest_3 00:02:57.475 HELP=0 00:02:57.475 DRY_RUN=0 00:02:57.475 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img, 00:02:57.475 NVME_DISKS_TYPE=nvme,nvme, 00:02:57.475 NVME_AUTO_CREATE=0 00:02:57.475 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img, 00:02:57.475 NVME_CMB=,, 00:02:57.475 NVME_PMR=,, 00:02:57.475 NVME_ZNS=,, 00:02:57.475 NVME_MS=,, 00:02:57.475 NVME_FDP=,, 00:02:57.475 SPDK_VAGRANT_DISTRO=fedora39 00:02:57.475 SPDK_VAGRANT_VMCPU=10 00:02:57.475 SPDK_VAGRANT_VMRAM=12288 00:02:57.475 SPDK_VAGRANT_PROVIDER=libvirt 00:02:57.475 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:02:57.475 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:02:57.475 SPDK_OPENSTACK_NETWORK=0 00:02:57.476 VAGRANT_PACKAGE_BOX=0 00:02:57.476 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest_3/spdk/scripts/vagrant/Vagrantfile 
00:02:57.476 FORCE_DISTRO=true 00:02:57.476 VAGRANT_BOX_VERSION= 00:02:57.476 EXTRA_VAGRANTFILES= 00:02:57.476 NIC_MODEL=virtio 00:02:57.476 00:02:57.476 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest_3/fedora39-libvirt' 00:02:57.476 /var/jenkins/workspace/raid-vg-autotest_3/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest_3 00:03:00.086 Bringing machine 'default' up with 'libvirt' provider... 00:03:00.345 ==> default: Creating image (snapshot of base box volume). 00:03:00.609 ==> default: Creating domain with the following settings... 00:03:00.609 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732601564_c9232c2a349536bbd0d1 00:03:00.609 ==> default: -- Domain type: kvm 00:03:00.609 ==> default: -- Cpus: 10 00:03:00.609 ==> default: -- Feature: acpi 00:03:00.609 ==> default: -- Feature: apic 00:03:00.609 ==> default: -- Feature: pae 00:03:00.609 ==> default: -- Memory: 12288M 00:03:00.609 ==> default: -- Memory Backing: hugepages: 00:03:00.609 ==> default: -- Management MAC: 00:03:00.609 ==> default: -- Loader: 00:03:00.609 ==> default: -- Nvram: 00:03:00.609 ==> default: -- Base box: spdk/fedora39 00:03:00.609 ==> default: -- Storage pool: default 00:03:00.609 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732601564_c9232c2a349536bbd0d1.img (20G) 00:03:00.609 ==> default: -- Volume Cache: default 00:03:00.609 ==> default: -- Kernel: 00:03:00.609 ==> default: -- Initrd: 00:03:00.609 ==> default: -- Graphics Type: vnc 00:03:00.609 ==> default: -- Graphics Port: -1 00:03:00.609 ==> default: -- Graphics IP: 127.0.0.1 00:03:00.609 ==> default: -- Graphics Password: Not defined 00:03:00.609 ==> default: -- Video Type: cirrus 00:03:00.609 ==> default: -- Video VRAM: 9216 00:03:00.609 ==> default: -- Sound Type: 00:03:00.609 ==> default: -- Keymap: en-us 00:03:00.609 ==> default: -- TPM Path: 00:03:00.609 ==> default: -- INPUT: type=mouse, bus=ps2 00:03:00.609 ==> default: -- Command line 
args: 00:03:00.609 ==> default: -> value=-device, 00:03:00.609 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:03:00.609 ==> default: -> value=-drive, 00:03:00.609 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0, 00:03:00.609 ==> default: -> value=-device, 00:03:00.609 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:03:00.609 ==> default: -> value=-device, 00:03:00.609 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:03:00.609 ==> default: -> value=-drive, 00:03:00.609 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:03:00.609 ==> default: -> value=-device, 00:03:00.609 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:03:00.609 ==> default: -> value=-drive, 00:03:00.609 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:03:00.609 ==> default: -> value=-device, 00:03:00.609 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:03:00.609 ==> default: -> value=-drive, 00:03:00.609 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:03:00.609 ==> default: -> value=-device, 00:03:00.609 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:03:00.609 ==> default: Creating shared folders metadata... 00:03:00.609 ==> default: Starting domain. 00:03:01.989 ==> default: Waiting for domain to get an IP address... 00:03:20.080 ==> default: Waiting for SSH to become available... 00:03:20.080 ==> default: Configuring and enabling network interfaces... 
00:03:25.358 default: SSH address: 192.168.121.156:22 00:03:25.358 default: SSH username: vagrant 00:03:25.358 default: SSH auth method: private key 00:03:27.932 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest_3/spdk/ => /home/vagrant/spdk_repo/spdk 00:03:36.059 ==> default: Mounting SSHFS shared folder... 00:03:38.588 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest_3/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:03:38.588 ==> default: Checking Mount.. 00:03:39.965 ==> default: Folder Successfully Mounted! 00:03:39.965 ==> default: Running provisioner: file... 00:03:41.346 default: ~/.gitconfig => .gitconfig 00:03:41.914 00:03:41.914 SUCCESS! 00:03:41.914 00:03:41.914 cd to /var/jenkins/workspace/raid-vg-autotest_3/fedora39-libvirt and type "vagrant ssh" to use. 00:03:41.914 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:03:41.914 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest_3/fedora39-libvirt" to destroy all trace of vm. 
00:03:41.914 00:03:41.925 [Pipeline] } 00:03:41.944 [Pipeline] // stage 00:03:41.956 [Pipeline] dir 00:03:41.957 Running in /var/jenkins/workspace/raid-vg-autotest_3/fedora39-libvirt 00:03:41.960 [Pipeline] { 00:03:41.977 [Pipeline] catchError 00:03:41.979 [Pipeline] { 00:03:41.995 [Pipeline] sh 00:03:42.279 + vagrant ssh-config --host vagrant 00:03:42.279 + sed -ne /^Host/,$p 00:03:42.279 + tee ssh_conf 00:03:45.572 Host vagrant 00:03:45.572 HostName 192.168.121.156 00:03:45.572 User vagrant 00:03:45.572 Port 22 00:03:45.572 UserKnownHostsFile /dev/null 00:03:45.572 StrictHostKeyChecking no 00:03:45.572 PasswordAuthentication no 00:03:45.572 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:03:45.572 IdentitiesOnly yes 00:03:45.572 LogLevel FATAL 00:03:45.572 ForwardAgent yes 00:03:45.572 ForwardX11 yes 00:03:45.572 00:03:45.586 [Pipeline] withEnv 00:03:45.588 [Pipeline] { 00:03:45.601 [Pipeline] sh 00:03:45.888 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:03:45.888 source /etc/os-release 00:03:45.888 [[ -e /image.version ]] && img=$(< /image.version) 00:03:45.888 # Minimal, systemd-like check. 00:03:45.888 if [[ -e /.dockerenv ]]; then 00:03:45.888 # Clear garbage from the node's name: 00:03:45.888 # agt-er_autotest_547-896 -> autotest_547-896 00:03:45.888 # $HOSTNAME is the actual container id 00:03:45.888 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:03:45.888 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:03:45.888 # We can assume this is a mount from a host where container is running, 00:03:45.888 # so fetch its hostname to easily identify the target swarm worker. 
00:03:45.888 container="$(< /etc/hostname) ($agent)" 00:03:45.888 else 00:03:45.888 # Fallback 00:03:45.888 container=$agent 00:03:45.888 fi 00:03:45.888 fi 00:03:45.888 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:03:45.888 00:03:46.160 [Pipeline] } 00:03:46.177 [Pipeline] // withEnv 00:03:46.184 [Pipeline] setCustomBuildProperty 00:03:46.198 [Pipeline] stage 00:03:46.200 [Pipeline] { (Tests) 00:03:46.217 [Pipeline] sh 00:03:46.586 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_3/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:03:46.857 [Pipeline] sh 00:03:47.134 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_3/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:03:47.405 [Pipeline] timeout 00:03:47.406 Timeout set to expire in 1 hr 30 min 00:03:47.407 [Pipeline] { 00:03:47.421 [Pipeline] sh 00:03:47.701 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:03:48.274 HEAD is now at 8afd1c921 blob: don't free bs when spdk_bs_destroy/spdk_bs_unload fails 00:03:48.286 [Pipeline] sh 00:03:48.566 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:03:48.840 [Pipeline] sh 00:03:49.123 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_3/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:03:49.399 [Pipeline] sh 00:03:49.683 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo 00:03:49.943 ++ readlink -f spdk_repo 00:03:49.943 + DIR_ROOT=/home/vagrant/spdk_repo 00:03:49.943 + [[ -n /home/vagrant/spdk_repo ]] 00:03:49.943 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:03:49.943 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:03:49.943 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:03:49.943 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:03:49.943 + [[ -d /home/vagrant/spdk_repo/output ]] 00:03:49.943 + [[ raid-vg-autotest == pkgdep-* ]] 00:03:49.943 + cd /home/vagrant/spdk_repo 00:03:49.943 + source /etc/os-release 00:03:49.943 ++ NAME='Fedora Linux' 00:03:49.943 ++ VERSION='39 (Cloud Edition)' 00:03:49.943 ++ ID=fedora 00:03:49.943 ++ VERSION_ID=39 00:03:49.943 ++ VERSION_CODENAME= 00:03:49.943 ++ PLATFORM_ID=platform:f39 00:03:49.943 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:03:49.943 ++ ANSI_COLOR='0;38;2;60;110;180' 00:03:49.943 ++ LOGO=fedora-logo-icon 00:03:49.943 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:03:49.943 ++ HOME_URL=https://fedoraproject.org/ 00:03:49.943 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:03:49.943 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:03:49.943 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:03:49.943 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:03:49.943 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:03:49.943 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:03:49.943 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:03:49.943 ++ SUPPORT_END=2024-11-12 00:03:49.943 ++ VARIANT='Cloud Edition' 00:03:49.943 ++ VARIANT_ID=cloud 00:03:49.943 + uname -a 00:03:49.943 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:03:49.943 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:50.512 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:50.512 Hugepages 00:03:50.512 node hugesize free / total 00:03:50.512 node0 1048576kB 0 / 0 00:03:50.512 node0 2048kB 0 / 0 00:03:50.512 00:03:50.512 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:50.512 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:50.512 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:50.512 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 
nvme1n1 nvme1n2 nvme1n3 00:03:50.512 + rm -f /tmp/spdk-ld-path 00:03:50.512 + source autorun-spdk.conf 00:03:50.512 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:50.512 ++ SPDK_RUN_ASAN=1 00:03:50.512 ++ SPDK_RUN_UBSAN=1 00:03:50.512 ++ SPDK_TEST_RAID=1 00:03:50.512 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:50.512 ++ RUN_NIGHTLY=0 00:03:50.512 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:03:50.512 + [[ -n '' ]] 00:03:50.512 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:03:50.773 + for M in /var/spdk/build-*-manifest.txt 00:03:50.773 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:03:50.773 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:50.773 + for M in /var/spdk/build-*-manifest.txt 00:03:50.773 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:03:50.773 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:50.773 + for M in /var/spdk/build-*-manifest.txt 00:03:50.773 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:03:50.773 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:50.773 ++ uname 00:03:50.773 + [[ Linux == \L\i\n\u\x ]] 00:03:50.773 + sudo dmesg -T 00:03:50.773 + sudo dmesg --clear 00:03:50.773 + dmesg_pid=5424 00:03:50.773 + sudo dmesg -Tw 00:03:50.773 + [[ Fedora Linux == FreeBSD ]] 00:03:50.773 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:50.773 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:50.773 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:03:50.773 + [[ -x /usr/src/fio-static/fio ]] 00:03:50.773 + export FIO_BIN=/usr/src/fio-static/fio 00:03:50.773 + FIO_BIN=/usr/src/fio-static/fio 00:03:50.773 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:03:50.773 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:03:50.773 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:03:50.773 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:50.773 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:50.773 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:03:50.773 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:50.773 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:50.773 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:50.773 06:13:34 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:03:50.773 06:13:34 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:50.773 06:13:34 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:50.773 06:13:34 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1 00:03:50.773 06:13:34 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1 00:03:50.773 06:13:34 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1 00:03:50.773 06:13:34 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:50.773 06:13:34 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0 00:03:50.773 06:13:34 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:03:50.773 06:13:34 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:51.035 06:13:34 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:03:51.035 06:13:34 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:51.035 06:13:34 -- scripts/common.sh@15 -- $ shopt -s extglob 00:03:51.035 06:13:34 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:03:51.035 06:13:34 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:51.035 06:13:34 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:51.035 06:13:34 -- 
paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:51.035 06:13:34 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:51.035 06:13:34 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:51.035 06:13:34 -- paths/export.sh@5 -- $ export PATH 00:03:51.035 06:13:34 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:51.035 06:13:34 -- 
common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:03:51.035 06:13:34 -- common/autobuild_common.sh@493 -- $ date +%s 00:03:51.035 06:13:34 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732601614.XXXXXX 00:03:51.035 06:13:34 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732601614.GUnBmJ 00:03:51.035 06:13:34 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:03:51.035 06:13:34 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:03:51.035 06:13:34 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:03:51.035 06:13:34 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:03:51.035 06:13:34 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:03:51.035 06:13:34 -- common/autobuild_common.sh@509 -- $ get_config_params 00:03:51.035 06:13:34 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:03:51.035 06:13:34 -- common/autotest_common.sh@10 -- $ set +x 00:03:51.035 06:13:34 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 00:03:51.035 06:13:34 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:03:51.035 06:13:34 -- pm/common@17 -- $ local monitor 00:03:51.035 06:13:34 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:51.035 06:13:34 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:51.035 06:13:34 -- pm/common@25 -- $ sleep 1 00:03:51.035 06:13:34 -- pm/common@21 -- $ date +%s 00:03:51.035 06:13:34 -- pm/common@21 -- $ date +%s 00:03:51.035 
06:13:34 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732601614 00:03:51.035 06:13:34 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732601614 00:03:51.035 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732601614_collect-vmstat.pm.log 00:03:51.035 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732601614_collect-cpu-load.pm.log 00:03:51.972 06:13:35 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:03:51.972 06:13:35 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:03:51.972 06:13:35 -- spdk/autobuild.sh@12 -- $ umask 022 00:03:51.972 06:13:35 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:51.972 06:13:35 -- spdk/autobuild.sh@16 -- $ date -u 00:03:51.972 Tue Nov 26 06:13:35 AM UTC 2024 00:03:51.972 06:13:35 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:03:51.972 v25.01-pre-239-g8afd1c921 00:03:51.972 06:13:36 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:03:51.972 06:13:36 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:03:51.972 06:13:36 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:51.972 06:13:36 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:51.972 06:13:36 -- common/autotest_common.sh@10 -- $ set +x 00:03:51.972 ************************************ 00:03:51.972 START TEST asan 00:03:51.972 ************************************ 00:03:51.972 06:13:36 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:03:51.972 using asan 00:03:51.972 00:03:51.972 real 0m0.000s 00:03:51.972 user 0m0.000s 00:03:51.972 sys 0m0.000s 00:03:51.972 06:13:36 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:51.972 06:13:36 asan -- common/autotest_common.sh@10 -- $ set +x 
00:03:51.972 ************************************ 00:03:51.972 END TEST asan 00:03:51.972 ************************************ 00:03:51.972 06:13:36 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:03:51.972 06:13:36 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:03:51.972 06:13:36 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:51.972 06:13:36 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:51.972 06:13:36 -- common/autotest_common.sh@10 -- $ set +x 00:03:51.972 ************************************ 00:03:51.972 START TEST ubsan 00:03:51.972 ************************************ 00:03:51.973 using ubsan 00:03:51.973 06:13:36 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:03:51.973 00:03:51.973 real 0m0.000s 00:03:51.973 user 0m0.000s 00:03:51.973 sys 0m0.000s 00:03:51.973 06:13:36 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:51.973 06:13:36 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:03:51.973 ************************************ 00:03:51.973 END TEST ubsan 00:03:51.973 ************************************ 00:03:52.231 06:13:36 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:03:52.231 06:13:36 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:52.231 06:13:36 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:52.231 06:13:36 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:52.231 06:13:36 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:52.231 06:13:36 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:52.231 06:13:36 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:52.231 06:13:36 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:52.231 06:13:36 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared 00:03:52.231 Using default SPDK env in 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:52.231 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:52.798 Using 'verbs' RDMA provider 00:04:11.874 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:04:26.769 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:04:26.769 Creating mk/config.mk...done. 00:04:26.769 Creating mk/cc.flags.mk...done. 00:04:26.769 Type 'make' to build. 00:04:26.769 06:14:09 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:04:26.769 06:14:09 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:04:26.769 06:14:09 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:04:26.769 06:14:09 -- common/autotest_common.sh@10 -- $ set +x 00:04:26.769 ************************************ 00:04:26.769 START TEST make 00:04:26.769 ************************************ 00:04:26.769 06:14:09 make -- common/autotest_common.sh@1129 -- $ make -j10 00:04:26.769 make[1]: Nothing to be done for 'all'. 
00:04:38.978 The Meson build system 00:04:38.978 Version: 1.5.0 00:04:38.978 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:04:38.978 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:04:38.978 Build type: native build 00:04:38.978 Program cat found: YES (/usr/bin/cat) 00:04:38.978 Project name: DPDK 00:04:38.978 Project version: 24.03.0 00:04:38.978 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:04:38.978 C linker for the host machine: cc ld.bfd 2.40-14 00:04:38.978 Host machine cpu family: x86_64 00:04:38.978 Host machine cpu: x86_64 00:04:38.978 Message: ## Building in Developer Mode ## 00:04:38.978 Program pkg-config found: YES (/usr/bin/pkg-config) 00:04:38.978 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:04:38.978 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:04:38.978 Program python3 found: YES (/usr/bin/python3) 00:04:38.978 Program cat found: YES (/usr/bin/cat) 00:04:38.978 Compiler for C supports arguments -march=native: YES 00:04:38.978 Checking for size of "void *" : 8 00:04:38.978 Checking for size of "void *" : 8 (cached) 00:04:38.978 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:04:38.978 Library m found: YES 00:04:38.978 Library numa found: YES 00:04:38.978 Has header "numaif.h" : YES 00:04:38.978 Library fdt found: NO 00:04:38.978 Library execinfo found: NO 00:04:38.978 Has header "execinfo.h" : YES 00:04:38.978 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:04:38.978 Run-time dependency libarchive found: NO (tried pkgconfig) 00:04:38.978 Run-time dependency libbsd found: NO (tried pkgconfig) 00:04:38.978 Run-time dependency jansson found: NO (tried pkgconfig) 00:04:38.978 Run-time dependency openssl found: YES 3.1.1 00:04:38.978 Run-time dependency libpcap found: YES 1.10.4 00:04:38.978 Has header "pcap.h" with dependency 
libpcap: YES 00:04:38.978 Compiler for C supports arguments -Wcast-qual: YES 00:04:38.978 Compiler for C supports arguments -Wdeprecated: YES 00:04:38.978 Compiler for C supports arguments -Wformat: YES 00:04:38.978 Compiler for C supports arguments -Wformat-nonliteral: NO 00:04:38.978 Compiler for C supports arguments -Wformat-security: NO 00:04:38.978 Compiler for C supports arguments -Wmissing-declarations: YES 00:04:38.978 Compiler for C supports arguments -Wmissing-prototypes: YES 00:04:38.978 Compiler for C supports arguments -Wnested-externs: YES 00:04:38.978 Compiler for C supports arguments -Wold-style-definition: YES 00:04:38.978 Compiler for C supports arguments -Wpointer-arith: YES 00:04:38.978 Compiler for C supports arguments -Wsign-compare: YES 00:04:38.978 Compiler for C supports arguments -Wstrict-prototypes: YES 00:04:38.978 Compiler for C supports arguments -Wundef: YES 00:04:38.978 Compiler for C supports arguments -Wwrite-strings: YES 00:04:38.978 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:04:38.978 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:04:38.978 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:04:38.978 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:04:38.978 Program objdump found: YES (/usr/bin/objdump) 00:04:38.978 Compiler for C supports arguments -mavx512f: YES 00:04:38.978 Checking if "AVX512 checking" compiles: YES 00:04:38.978 Fetching value of define "__SSE4_2__" : 1 00:04:38.978 Fetching value of define "__AES__" : 1 00:04:38.978 Fetching value of define "__AVX__" : 1 00:04:38.978 Fetching value of define "__AVX2__" : 1 00:04:38.978 Fetching value of define "__AVX512BW__" : 1 00:04:38.978 Fetching value of define "__AVX512CD__" : 1 00:04:38.978 Fetching value of define "__AVX512DQ__" : 1 00:04:38.978 Fetching value of define "__AVX512F__" : 1 00:04:38.978 Fetching value of define "__AVX512VL__" : 1 00:04:38.978 Fetching value of define 
"__PCLMUL__" : 1 00:04:38.978 Fetching value of define "__RDRND__" : 1 00:04:38.979 Fetching value of define "__RDSEED__" : 1 00:04:38.979 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:04:38.979 Fetching value of define "__znver1__" : (undefined) 00:04:38.979 Fetching value of define "__znver2__" : (undefined) 00:04:38.979 Fetching value of define "__znver3__" : (undefined) 00:04:38.979 Fetching value of define "__znver4__" : (undefined) 00:04:38.979 Library asan found: YES 00:04:38.979 Compiler for C supports arguments -Wno-format-truncation: YES 00:04:38.979 Message: lib/log: Defining dependency "log" 00:04:38.979 Message: lib/kvargs: Defining dependency "kvargs" 00:04:38.979 Message: lib/telemetry: Defining dependency "telemetry" 00:04:38.979 Library rt found: YES 00:04:38.979 Checking for function "getentropy" : NO 00:04:38.979 Message: lib/eal: Defining dependency "eal" 00:04:38.979 Message: lib/ring: Defining dependency "ring" 00:04:38.979 Message: lib/rcu: Defining dependency "rcu" 00:04:38.979 Message: lib/mempool: Defining dependency "mempool" 00:04:38.979 Message: lib/mbuf: Defining dependency "mbuf" 00:04:38.979 Fetching value of define "__PCLMUL__" : 1 (cached) 00:04:38.979 Fetching value of define "__AVX512F__" : 1 (cached) 00:04:38.979 Fetching value of define "__AVX512BW__" : 1 (cached) 00:04:38.979 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:04:38.979 Fetching value of define "__AVX512VL__" : 1 (cached) 00:04:38.979 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:04:38.979 Compiler for C supports arguments -mpclmul: YES 00:04:38.979 Compiler for C supports arguments -maes: YES 00:04:38.979 Compiler for C supports arguments -mavx512f: YES (cached) 00:04:38.979 Compiler for C supports arguments -mavx512bw: YES 00:04:38.979 Compiler for C supports arguments -mavx512dq: YES 00:04:38.979 Compiler for C supports arguments -mavx512vl: YES 00:04:38.979 Compiler for C supports arguments -mvpclmulqdq: YES 
00:04:38.979 Compiler for C supports arguments -mavx2: YES 00:04:38.979 Compiler for C supports arguments -mavx: YES 00:04:38.979 Message: lib/net: Defining dependency "net" 00:04:38.979 Message: lib/meter: Defining dependency "meter" 00:04:38.979 Message: lib/ethdev: Defining dependency "ethdev" 00:04:38.979 Message: lib/pci: Defining dependency "pci" 00:04:38.979 Message: lib/cmdline: Defining dependency "cmdline" 00:04:38.979 Message: lib/hash: Defining dependency "hash" 00:04:38.979 Message: lib/timer: Defining dependency "timer" 00:04:38.979 Message: lib/compressdev: Defining dependency "compressdev" 00:04:38.979 Message: lib/cryptodev: Defining dependency "cryptodev" 00:04:38.979 Message: lib/dmadev: Defining dependency "dmadev" 00:04:38.979 Compiler for C supports arguments -Wno-cast-qual: YES 00:04:38.979 Message: lib/power: Defining dependency "power" 00:04:38.979 Message: lib/reorder: Defining dependency "reorder" 00:04:38.979 Message: lib/security: Defining dependency "security" 00:04:38.979 Has header "linux/userfaultfd.h" : YES 00:04:38.979 Has header "linux/vduse.h" : YES 00:04:38.979 Message: lib/vhost: Defining dependency "vhost" 00:04:38.979 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:04:38.979 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:04:38.979 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:04:38.979 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:04:38.979 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:04:38.979 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:04:38.979 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:04:38.979 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:04:38.979 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:04:38.979 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:04:38.979 
Program doxygen found: YES (/usr/local/bin/doxygen) 00:04:38.979 Configuring doxy-api-html.conf using configuration 00:04:38.979 Configuring doxy-api-man.conf using configuration 00:04:38.979 Program mandb found: YES (/usr/bin/mandb) 00:04:38.979 Program sphinx-build found: NO 00:04:38.979 Configuring rte_build_config.h using configuration 00:04:38.979 Message: 00:04:38.979 ================= 00:04:38.979 Applications Enabled 00:04:38.979 ================= 00:04:38.979 00:04:38.979 apps: 00:04:38.979 00:04:38.979 00:04:38.979 Message: 00:04:38.979 ================= 00:04:38.979 Libraries Enabled 00:04:38.979 ================= 00:04:38.979 00:04:38.979 libs: 00:04:38.979 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:04:38.979 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:04:38.979 cryptodev, dmadev, power, reorder, security, vhost, 00:04:38.979 00:04:38.979 Message: 00:04:38.979 =============== 00:04:38.979 Drivers Enabled 00:04:38.979 =============== 00:04:38.979 00:04:38.979 common: 00:04:38.979 00:04:38.979 bus: 00:04:38.979 pci, vdev, 00:04:38.979 mempool: 00:04:38.979 ring, 00:04:38.979 dma: 00:04:38.979 00:04:38.979 net: 00:04:38.979 00:04:38.979 crypto: 00:04:38.979 00:04:38.979 compress: 00:04:38.979 00:04:38.979 vdpa: 00:04:38.979 00:04:38.979 00:04:38.979 Message: 00:04:38.979 ================= 00:04:38.979 Content Skipped 00:04:38.979 ================= 00:04:38.979 00:04:38.979 apps: 00:04:38.979 dumpcap: explicitly disabled via build config 00:04:38.979 graph: explicitly disabled via build config 00:04:38.979 pdump: explicitly disabled via build config 00:04:38.979 proc-info: explicitly disabled via build config 00:04:38.979 test-acl: explicitly disabled via build config 00:04:38.979 test-bbdev: explicitly disabled via build config 00:04:38.979 test-cmdline: explicitly disabled via build config 00:04:38.979 test-compress-perf: explicitly disabled via build config 00:04:38.979 test-crypto-perf: explicitly disabled via build 
config 00:04:38.979 test-dma-perf: explicitly disabled via build config 00:04:38.979 test-eventdev: explicitly disabled via build config 00:04:38.979 test-fib: explicitly disabled via build config 00:04:38.979 test-flow-perf: explicitly disabled via build config 00:04:38.979 test-gpudev: explicitly disabled via build config 00:04:38.979 test-mldev: explicitly disabled via build config 00:04:38.979 test-pipeline: explicitly disabled via build config 00:04:38.979 test-pmd: explicitly disabled via build config 00:04:38.979 test-regex: explicitly disabled via build config 00:04:38.979 test-sad: explicitly disabled via build config 00:04:38.979 test-security-perf: explicitly disabled via build config 00:04:38.979 00:04:38.979 libs: 00:04:38.979 argparse: explicitly disabled via build config 00:04:38.979 metrics: explicitly disabled via build config 00:04:38.979 acl: explicitly disabled via build config 00:04:38.979 bbdev: explicitly disabled via build config 00:04:38.979 bitratestats: explicitly disabled via build config 00:04:38.979 bpf: explicitly disabled via build config 00:04:38.979 cfgfile: explicitly disabled via build config 00:04:38.979 distributor: explicitly disabled via build config 00:04:38.979 efd: explicitly disabled via build config 00:04:38.979 eventdev: explicitly disabled via build config 00:04:38.979 dispatcher: explicitly disabled via build config 00:04:38.979 gpudev: explicitly disabled via build config 00:04:38.979 gro: explicitly disabled via build config 00:04:38.979 gso: explicitly disabled via build config 00:04:38.979 ip_frag: explicitly disabled via build config 00:04:38.979 jobstats: explicitly disabled via build config 00:04:38.979 latencystats: explicitly disabled via build config 00:04:38.979 lpm: explicitly disabled via build config 00:04:38.979 member: explicitly disabled via build config 00:04:38.979 pcapng: explicitly disabled via build config 00:04:38.979 rawdev: explicitly disabled via build config 00:04:38.979 regexdev: explicitly 
disabled via build config 00:04:38.979 mldev: explicitly disabled via build config 00:04:38.979 rib: explicitly disabled via build config 00:04:38.979 sched: explicitly disabled via build config 00:04:38.979 stack: explicitly disabled via build config 00:04:38.979 ipsec: explicitly disabled via build config 00:04:38.979 pdcp: explicitly disabled via build config 00:04:38.979 fib: explicitly disabled via build config 00:04:38.979 port: explicitly disabled via build config 00:04:38.979 pdump: explicitly disabled via build config 00:04:38.979 table: explicitly disabled via build config 00:04:38.979 pipeline: explicitly disabled via build config 00:04:38.979 graph: explicitly disabled via build config 00:04:38.979 node: explicitly disabled via build config 00:04:38.979 00:04:38.979 drivers: 00:04:38.979 common/cpt: not in enabled drivers build config 00:04:38.979 common/dpaax: not in enabled drivers build config 00:04:38.979 common/iavf: not in enabled drivers build config 00:04:38.979 common/idpf: not in enabled drivers build config 00:04:38.979 common/ionic: not in enabled drivers build config 00:04:38.979 common/mvep: not in enabled drivers build config 00:04:38.979 common/octeontx: not in enabled drivers build config 00:04:38.979 bus/auxiliary: not in enabled drivers build config 00:04:38.979 bus/cdx: not in enabled drivers build config 00:04:38.979 bus/dpaa: not in enabled drivers build config 00:04:38.979 bus/fslmc: not in enabled drivers build config 00:04:38.979 bus/ifpga: not in enabled drivers build config 00:04:38.979 bus/platform: not in enabled drivers build config 00:04:38.979 bus/uacce: not in enabled drivers build config 00:04:38.979 bus/vmbus: not in enabled drivers build config 00:04:38.979 common/cnxk: not in enabled drivers build config 00:04:38.979 common/mlx5: not in enabled drivers build config 00:04:38.979 common/nfp: not in enabled drivers build config 00:04:38.979 common/nitrox: not in enabled drivers build config 00:04:38.979 common/qat: not 
in enabled drivers build config 00:04:38.979 common/sfc_efx: not in enabled drivers build config 00:04:38.979 mempool/bucket: not in enabled drivers build config 00:04:38.979 mempool/cnxk: not in enabled drivers build config 00:04:38.979 mempool/dpaa: not in enabled drivers build config 00:04:38.979 mempool/dpaa2: not in enabled drivers build config 00:04:38.980 mempool/octeontx: not in enabled drivers build config 00:04:38.980 mempool/stack: not in enabled drivers build config 00:04:38.980 dma/cnxk: not in enabled drivers build config 00:04:38.980 dma/dpaa: not in enabled drivers build config 00:04:38.980 dma/dpaa2: not in enabled drivers build config 00:04:38.980 dma/hisilicon: not in enabled drivers build config 00:04:38.980 dma/idxd: not in enabled drivers build config 00:04:38.980 dma/ioat: not in enabled drivers build config 00:04:38.980 dma/skeleton: not in enabled drivers build config 00:04:38.980 net/af_packet: not in enabled drivers build config 00:04:38.980 net/af_xdp: not in enabled drivers build config 00:04:38.980 net/ark: not in enabled drivers build config 00:04:38.980 net/atlantic: not in enabled drivers build config 00:04:38.980 net/avp: not in enabled drivers build config 00:04:38.980 net/axgbe: not in enabled drivers build config 00:04:38.980 net/bnx2x: not in enabled drivers build config 00:04:38.980 net/bnxt: not in enabled drivers build config 00:04:38.980 net/bonding: not in enabled drivers build config 00:04:38.980 net/cnxk: not in enabled drivers build config 00:04:38.980 net/cpfl: not in enabled drivers build config 00:04:38.980 net/cxgbe: not in enabled drivers build config 00:04:38.980 net/dpaa: not in enabled drivers build config 00:04:38.980 net/dpaa2: not in enabled drivers build config 00:04:38.980 net/e1000: not in enabled drivers build config 00:04:38.980 net/ena: not in enabled drivers build config 00:04:38.980 net/enetc: not in enabled drivers build config 00:04:38.980 net/enetfec: not in enabled drivers build config 
00:04:38.980 net/enic: not in enabled drivers build config 00:04:38.980 net/failsafe: not in enabled drivers build config 00:04:38.980 net/fm10k: not in enabled drivers build config 00:04:38.980 net/gve: not in enabled drivers build config 00:04:38.980 net/hinic: not in enabled drivers build config 00:04:38.980 net/hns3: not in enabled drivers build config 00:04:38.980 net/i40e: not in enabled drivers build config 00:04:38.980 net/iavf: not in enabled drivers build config 00:04:38.980 net/ice: not in enabled drivers build config 00:04:38.980 net/idpf: not in enabled drivers build config 00:04:38.980 net/igc: not in enabled drivers build config 00:04:38.980 net/ionic: not in enabled drivers build config 00:04:38.980 net/ipn3ke: not in enabled drivers build config 00:04:38.980 net/ixgbe: not in enabled drivers build config 00:04:38.980 net/mana: not in enabled drivers build config 00:04:38.980 net/memif: not in enabled drivers build config 00:04:38.980 net/mlx4: not in enabled drivers build config 00:04:38.980 net/mlx5: not in enabled drivers build config 00:04:38.980 net/mvneta: not in enabled drivers build config 00:04:38.980 net/mvpp2: not in enabled drivers build config 00:04:38.980 net/netvsc: not in enabled drivers build config 00:04:38.980 net/nfb: not in enabled drivers build config 00:04:38.980 net/nfp: not in enabled drivers build config 00:04:38.980 net/ngbe: not in enabled drivers build config 00:04:38.980 net/null: not in enabled drivers build config 00:04:38.980 net/octeontx: not in enabled drivers build config 00:04:38.980 net/octeon_ep: not in enabled drivers build config 00:04:38.980 net/pcap: not in enabled drivers build config 00:04:38.980 net/pfe: not in enabled drivers build config 00:04:38.980 net/qede: not in enabled drivers build config 00:04:38.980 net/ring: not in enabled drivers build config 00:04:38.980 net/sfc: not in enabled drivers build config 00:04:38.980 net/softnic: not in enabled drivers build config 00:04:38.980 net/tap: not in 
enabled drivers build config 00:04:38.980 net/thunderx: not in enabled drivers build config 00:04:38.980 net/txgbe: not in enabled drivers build config 00:04:38.980 net/vdev_netvsc: not in enabled drivers build config 00:04:38.980 net/vhost: not in enabled drivers build config 00:04:38.980 net/virtio: not in enabled drivers build config 00:04:38.980 net/vmxnet3: not in enabled drivers build config 00:04:38.980 raw/*: missing internal dependency, "rawdev" 00:04:38.980 crypto/armv8: not in enabled drivers build config 00:04:38.980 crypto/bcmfs: not in enabled drivers build config 00:04:38.980 crypto/caam_jr: not in enabled drivers build config 00:04:38.980 crypto/ccp: not in enabled drivers build config 00:04:38.980 crypto/cnxk: not in enabled drivers build config 00:04:38.980 crypto/dpaa_sec: not in enabled drivers build config 00:04:38.980 crypto/dpaa2_sec: not in enabled drivers build config 00:04:38.980 crypto/ipsec_mb: not in enabled drivers build config 00:04:38.980 crypto/mlx5: not in enabled drivers build config 00:04:38.980 crypto/mvsam: not in enabled drivers build config 00:04:38.980 crypto/nitrox: not in enabled drivers build config 00:04:38.980 crypto/null: not in enabled drivers build config 00:04:38.980 crypto/octeontx: not in enabled drivers build config 00:04:38.980 crypto/openssl: not in enabled drivers build config 00:04:38.980 crypto/scheduler: not in enabled drivers build config 00:04:38.980 crypto/uadk: not in enabled drivers build config 00:04:38.980 crypto/virtio: not in enabled drivers build config 00:04:38.980 compress/isal: not in enabled drivers build config 00:04:38.980 compress/mlx5: not in enabled drivers build config 00:04:38.980 compress/nitrox: not in enabled drivers build config 00:04:38.980 compress/octeontx: not in enabled drivers build config 00:04:38.980 compress/zlib: not in enabled drivers build config 00:04:38.980 regex/*: missing internal dependency, "regexdev" 00:04:38.980 ml/*: missing internal dependency, "mldev" 
00:04:38.980 vdpa/ifc: not in enabled drivers build config 00:04:38.980 vdpa/mlx5: not in enabled drivers build config 00:04:38.980 vdpa/nfp: not in enabled drivers build config 00:04:38.980 vdpa/sfc: not in enabled drivers build config 00:04:38.980 event/*: missing internal dependency, "eventdev" 00:04:38.980 baseband/*: missing internal dependency, "bbdev" 00:04:38.980 gpu/*: missing internal dependency, "gpudev" 00:04:38.980 00:04:38.980 00:04:39.548 Build targets in project: 85 00:04:39.548 00:04:39.548 DPDK 24.03.0 00:04:39.548 00:04:39.548 User defined options 00:04:39.548 buildtype : debug 00:04:39.548 default_library : shared 00:04:39.548 libdir : lib 00:04:39.548 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:04:39.548 b_sanitize : address 00:04:39.548 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:04:39.548 c_link_args : 00:04:39.548 cpu_instruction_set: native 00:04:39.548 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:04:39.548 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:04:39.548 enable_docs : false 00:04:39.548 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:04:39.548 enable_kmods : false 00:04:39.548 max_lcores : 128 00:04:39.548 tests : false 00:04:39.548 00:04:39.548 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:40.115 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:04:40.115 [1/268] Compiling C object 
lib/librte_log.a.p/log_log_linux.c.o 00:04:40.115 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:04:40.115 [3/268] Linking static target lib/librte_kvargs.a 00:04:40.115 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:04:40.115 [5/268] Linking static target lib/librte_log.a 00:04:40.373 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:04:40.632 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:04:40.891 [8/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:04:40.891 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:04:40.891 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:04:40.891 [11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:04:40.891 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:04:40.891 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:04:40.891 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:04:40.891 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:04:40.891 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:04:40.891 [17/268] Linking static target lib/librte_telemetry.a 00:04:40.891 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:04:41.457 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:04:41.457 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:04:41.457 [21/268] Linking target lib/librte_log.so.24.1 00:04:41.457 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:04:41.457 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:04:41.457 [24/268] Compiling C 
object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:04:41.715 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:04:41.715 [26/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:04:41.715 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:04:41.715 [28/268] Linking target lib/librte_kvargs.so.24.1 00:04:41.715 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:04:41.715 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:04:41.974 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:04:41.974 [32/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:04:41.974 [33/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:04:41.974 [34/268] Linking target lib/librte_telemetry.so.24.1 00:04:41.974 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:04:42.233 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:04:42.233 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:04:42.491 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:04:42.491 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:04:42.491 [40/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:04:42.491 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:04:42.491 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:04:42.491 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:04:42.491 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:04:42.750 [45/268] Compiling C object 
lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:04:42.750 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:04:43.037 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:04:43.037 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:04:43.037 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:04:43.037 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:04:43.295 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:04:43.295 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:04:43.295 [53/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:04:43.295 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:04:43.554 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:04:43.554 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:04:43.554 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:04:43.814 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:04:43.814 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:04:43.814 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:04:43.814 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:04:43.814 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:04:43.814 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:04:43.814 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:04:43.814 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:04:44.073 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:04:44.332 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 
00:04:44.332 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:04:44.332 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:04:44.332 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:04:44.332 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:04:44.332 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:04:44.332 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:04:44.591 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:04:44.592 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:04:44.592 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:04:44.592 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:04:44.592 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:04:44.851 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:04:44.851 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:04:44.851 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:04:44.851 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:04:45.110 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:04:45.110 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:04:45.110 [85/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:04:45.110 [86/268] Linking static target lib/librte_ring.a 00:04:45.110 [87/268] Linking static target lib/librte_eal.a 00:04:45.368 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:04:45.368 [89/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:04:45.368 [90/268] Linking static target lib/librte_rcu.a 00:04:45.368 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:04:45.368 
[92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:04:45.627 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:04:45.627 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:04:45.627 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:04:45.627 [96/268] Linking static target lib/librte_mempool.a 00:04:45.627 [97/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:04:45.886 [98/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:04:45.886 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:04:45.886 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:04:46.146 [101/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:04:46.146 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:04:46.146 [103/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:04:46.405 [104/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:04:46.405 [105/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:04:46.405 [106/268] Linking static target lib/librte_mbuf.a 00:04:46.405 [107/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:04:46.405 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:04:46.405 [109/268] Linking static target lib/librte_net.a 00:04:46.405 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:04:46.405 [111/268] Linking static target lib/librte_meter.a 00:04:46.664 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:04:46.664 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:04:46.664 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:04:46.924 [115/268] Generating lib/mempool.sym_chk with a custom command (wrapped by 
meson to capture output) 00:04:46.924 [116/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:04:46.924 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:04:46.924 [118/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:04:47.492 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:04:47.492 [120/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:04:47.750 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:04:47.750 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:04:47.750 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:04:48.124 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:04:48.124 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:04:48.124 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:04:48.124 [127/268] Linking static target lib/librte_pci.a 00:04:48.124 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:04:48.383 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:04:48.383 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:04:48.383 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:04:48.383 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:04:48.383 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:04:48.383 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:04:48.383 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:04:48.643 [136/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:48.643 [137/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:04:48.643 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:04:48.643 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:04:48.643 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:04:48.643 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:04:48.643 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:04:48.643 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:04:48.643 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:04:48.900 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:04:48.900 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:04:48.900 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:04:48.900 [148/268] Linking static target lib/librte_cmdline.a 00:04:49.158 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:04:49.158 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:04:49.416 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:04:49.416 [152/268] Linking static target lib/librte_timer.a 00:04:49.416 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:04:49.416 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:04:49.416 [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:04:49.675 [156/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:04:49.935 [157/268] Linking static target lib/librte_ethdev.a 00:04:49.935 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:04:49.935 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:04:49.935 
[160/268] Linking static target lib/librte_compressdev.a 00:04:50.194 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:04:50.194 [162/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:04:50.194 [163/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:04:50.194 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:04:50.194 [165/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:04:50.194 [166/268] Linking static target lib/librte_hash.a 00:04:50.452 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:04:50.452 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:04:50.452 [169/268] Linking static target lib/librte_dmadev.a 00:04:50.711 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:04:50.711 [171/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:04:50.711 [172/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:04:50.711 [173/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:04:50.968 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:04:50.968 [175/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:51.226 [176/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:04:51.226 [177/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:04:51.226 [178/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:51.484 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:04:51.484 [180/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:04:51.484 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 
00:04:51.484 [182/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:04:51.742 [183/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:04:51.742 [184/268] Linking static target lib/librte_cryptodev.a 00:04:51.742 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:04:51.742 [186/268] Linking static target lib/librte_power.a 00:04:52.001 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:04:52.259 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:04:52.259 [189/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:04:52.260 [190/268] Linking static target lib/librte_security.a 00:04:52.519 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:04:52.519 [192/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:04:52.519 [193/268] Linking static target lib/librte_reorder.a 00:04:52.778 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:04:53.036 [195/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:04:53.036 [196/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:04:53.036 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:04:53.295 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:04:53.553 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:04:53.553 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:04:53.553 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:04:53.553 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:04:53.811 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:04:54.085 [204/268] Compiling C object 
drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:04:54.085 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:04:54.085 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:04:54.085 [207/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:04:54.085 [208/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:04:54.348 [209/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:54.348 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:04:54.348 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:04:54.348 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:04:54.683 [213/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:54.683 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:54.683 [215/268] Linking static target drivers/librte_bus_vdev.a 00:04:54.683 [216/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:04:54.683 [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:54.683 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:54.683 [219/268] Linking static target drivers/librte_bus_pci.a 00:04:54.683 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:04:54.683 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:04:54.942 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:54.942 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:04:55.199 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:55.199 
[225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:55.199 [226/268] Linking static target drivers/librte_mempool_ring.a 00:04:55.199 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:56.575 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:04:57.511 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:04:57.511 [230/268] Linking target lib/librte_eal.so.24.1 00:04:57.771 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:04:57.771 [232/268] Linking target lib/librte_ring.so.24.1 00:04:57.771 [233/268] Linking target lib/librte_meter.so.24.1 00:04:57.771 [234/268] Linking target lib/librte_pci.so.24.1 00:04:57.771 [235/268] Linking target lib/librte_dmadev.so.24.1 00:04:57.771 [236/268] Linking target lib/librte_timer.so.24.1 00:04:57.771 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:04:57.771 [238/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:04:58.028 [239/268] Linking target lib/librte_rcu.so.24.1 00:04:58.028 [240/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:04:58.028 [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:04:58.028 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:04:58.028 [243/268] Linking target lib/librte_mempool.so.24.1 00:04:58.029 [244/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:04:58.029 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:04:58.029 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:04:58.029 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:04:58.029 [248/268] Linking target 
drivers/librte_mempool_ring.so.24.1 00:04:58.287 [249/268] Linking target lib/librte_mbuf.so.24.1 00:04:58.287 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:04:58.287 [251/268] Linking target lib/librte_net.so.24.1 00:04:58.287 [252/268] Linking target lib/librte_compressdev.so.24.1 00:04:58.287 [253/268] Linking target lib/librte_reorder.so.24.1 00:04:58.287 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:04:58.544 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:04:58.544 [256/268] Linking target lib/librte_hash.so.24.1 00:04:58.545 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:04:58.545 [258/268] Linking target lib/librte_cmdline.so.24.1 00:04:58.545 [259/268] Linking target lib/librte_security.so.24.1 00:04:58.803 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:04:59.371 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:59.371 [262/268] Linking target lib/librte_ethdev.so.24.1 00:04:59.629 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:04:59.629 [264/268] Linking target lib/librte_power.so.24.1 00:05:02.916 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:05:02.916 [266/268] Linking static target lib/librte_vhost.a 00:05:04.294 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:05:04.552 [268/268] Linking target lib/librte_vhost.so.24.1 00:05:04.552 INFO: autodetecting backend as ninja 00:05:04.552 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:05:36.630 CC lib/log/log.o 00:05:36.630 CC lib/ut_mock/mock.o 00:05:36.630 CC lib/log/log_flags.o 00:05:36.630 CC lib/log/log_deprecated.o 00:05:36.631 CC lib/ut/ut.o 00:05:36.631 LIB 
libspdk_ut.a 00:05:36.631 LIB libspdk_ut_mock.a 00:05:36.631 SO libspdk_ut.so.2.0 00:05:36.631 SO libspdk_ut_mock.so.6.0 00:05:36.631 LIB libspdk_log.a 00:05:36.631 SYMLINK libspdk_ut.so 00:05:36.631 SYMLINK libspdk_ut_mock.so 00:05:36.631 SO libspdk_log.so.7.1 00:05:36.631 SYMLINK libspdk_log.so 00:05:36.631 CC lib/dma/dma.o 00:05:36.631 CXX lib/trace_parser/trace.o 00:05:36.631 CC lib/util/base64.o 00:05:36.631 CC lib/util/bit_array.o 00:05:36.631 CC lib/util/cpuset.o 00:05:36.631 CC lib/util/crc32.o 00:05:36.631 CC lib/util/crc16.o 00:05:36.631 CC lib/util/crc32c.o 00:05:36.631 CC lib/ioat/ioat.o 00:05:36.631 CC lib/vfio_user/host/vfio_user_pci.o 00:05:36.631 CC lib/vfio_user/host/vfio_user.o 00:05:36.631 CC lib/util/crc32_ieee.o 00:05:36.631 CC lib/util/crc64.o 00:05:36.631 CC lib/util/dif.o 00:05:36.631 CC lib/util/fd.o 00:05:36.631 CC lib/util/fd_group.o 00:05:36.631 LIB libspdk_dma.a 00:05:36.631 SO libspdk_dma.so.5.0 00:05:36.631 CC lib/util/file.o 00:05:36.631 CC lib/util/hexlify.o 00:05:36.631 LIB libspdk_ioat.a 00:05:36.631 CC lib/util/iov.o 00:05:36.631 SYMLINK libspdk_dma.so 00:05:36.631 CC lib/util/math.o 00:05:36.631 SO libspdk_ioat.so.7.0 00:05:36.631 CC lib/util/net.o 00:05:36.631 SYMLINK libspdk_ioat.so 00:05:36.631 CC lib/util/pipe.o 00:05:36.631 CC lib/util/strerror_tls.o 00:05:36.631 LIB libspdk_vfio_user.a 00:05:36.631 CC lib/util/string.o 00:05:36.631 SO libspdk_vfio_user.so.5.0 00:05:36.631 CC lib/util/uuid.o 00:05:36.631 CC lib/util/xor.o 00:05:36.631 CC lib/util/zipf.o 00:05:36.631 CC lib/util/md5.o 00:05:36.631 SYMLINK libspdk_vfio_user.so 00:05:36.631 LIB libspdk_util.a 00:05:36.631 SO libspdk_util.so.10.1 00:05:36.631 SYMLINK libspdk_util.so 00:05:36.631 LIB libspdk_trace_parser.a 00:05:36.631 CC lib/env_dpdk/env.o 00:05:36.631 CC lib/env_dpdk/memory.o 00:05:36.631 CC lib/env_dpdk/pci.o 00:05:36.631 CC lib/env_dpdk/init.o 00:05:36.631 CC lib/json/json_parse.o 00:05:36.631 SO libspdk_trace_parser.so.6.0 00:05:36.631 CC lib/idxd/idxd.o 
00:05:36.631 CC lib/conf/conf.o 00:05:36.631 CC lib/vmd/vmd.o 00:05:36.631 CC lib/rdma_utils/rdma_utils.o 00:05:36.631 SYMLINK libspdk_trace_parser.so 00:05:36.631 CC lib/vmd/led.o 00:05:36.631 CC lib/json/json_util.o 00:05:36.631 LIB libspdk_conf.a 00:05:36.631 CC lib/json/json_write.o 00:05:36.631 SO libspdk_conf.so.6.0 00:05:36.631 LIB libspdk_rdma_utils.a 00:05:36.631 SO libspdk_rdma_utils.so.1.0 00:05:36.631 SYMLINK libspdk_conf.so 00:05:36.631 CC lib/env_dpdk/threads.o 00:05:36.631 CC lib/env_dpdk/pci_ioat.o 00:05:36.631 SYMLINK libspdk_rdma_utils.so 00:05:36.631 CC lib/env_dpdk/pci_virtio.o 00:05:36.631 CC lib/env_dpdk/pci_vmd.o 00:05:36.631 CC lib/env_dpdk/pci_idxd.o 00:05:36.631 CC lib/env_dpdk/pci_event.o 00:05:36.631 CC lib/env_dpdk/sigbus_handler.o 00:05:36.631 CC lib/env_dpdk/pci_dpdk.o 00:05:36.631 LIB libspdk_json.a 00:05:36.631 CC lib/idxd/idxd_user.o 00:05:36.631 SO libspdk_json.so.6.0 00:05:36.631 CC lib/env_dpdk/pci_dpdk_2207.o 00:05:36.631 CC lib/idxd/idxd_kernel.o 00:05:36.631 SYMLINK libspdk_json.so 00:05:36.631 CC lib/env_dpdk/pci_dpdk_2211.o 00:05:36.631 CC lib/rdma_provider/common.o 00:05:36.631 CC lib/rdma_provider/rdma_provider_verbs.o 00:05:36.631 CC lib/jsonrpc/jsonrpc_server.o 00:05:36.631 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:05:36.631 CC lib/jsonrpc/jsonrpc_client.o 00:05:36.631 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:05:36.631 LIB libspdk_rdma_provider.a 00:05:36.631 SO libspdk_rdma_provider.so.7.0 00:05:36.631 LIB libspdk_idxd.a 00:05:36.631 LIB libspdk_vmd.a 00:05:36.631 SO libspdk_idxd.so.12.1 00:05:36.631 SO libspdk_vmd.so.6.0 00:05:36.631 SYMLINK libspdk_rdma_provider.so 00:05:36.631 SYMLINK libspdk_vmd.so 00:05:36.631 SYMLINK libspdk_idxd.so 00:05:36.631 LIB libspdk_jsonrpc.a 00:05:36.631 SO libspdk_jsonrpc.so.6.0 00:05:36.631 SYMLINK libspdk_jsonrpc.so 00:05:36.631 CC lib/rpc/rpc.o 00:05:36.631 LIB libspdk_env_dpdk.a 00:05:36.631 SO libspdk_env_dpdk.so.15.1 00:05:36.631 LIB libspdk_rpc.a 00:05:36.631 SO libspdk_rpc.so.6.0 
00:05:36.631 SYMLINK libspdk_rpc.so 00:05:36.631 SYMLINK libspdk_env_dpdk.so 00:05:36.891 CC lib/trace/trace.o 00:05:36.891 CC lib/trace/trace_flags.o 00:05:36.891 CC lib/trace/trace_rpc.o 00:05:36.891 CC lib/keyring/keyring.o 00:05:36.891 CC lib/keyring/keyring_rpc.o 00:05:36.891 CC lib/notify/notify.o 00:05:36.891 CC lib/notify/notify_rpc.o 00:05:37.150 LIB libspdk_notify.a 00:05:37.150 SO libspdk_notify.so.6.0 00:05:37.150 LIB libspdk_trace.a 00:05:37.150 SYMLINK libspdk_notify.so 00:05:37.150 SO libspdk_trace.so.11.0 00:05:37.150 LIB libspdk_keyring.a 00:05:37.409 SO libspdk_keyring.so.2.0 00:05:37.409 SYMLINK libspdk_trace.so 00:05:37.409 SYMLINK libspdk_keyring.so 00:05:37.669 CC lib/sock/sock.o 00:05:37.669 CC lib/sock/sock_rpc.o 00:05:37.669 CC lib/thread/iobuf.o 00:05:37.669 CC lib/thread/thread.o 00:05:38.239 LIB libspdk_sock.a 00:05:38.239 SO libspdk_sock.so.10.0 00:05:38.497 SYMLINK libspdk_sock.so 00:05:38.756 CC lib/nvme/nvme_ctrlr_cmd.o 00:05:38.756 CC lib/nvme/nvme_ns_cmd.o 00:05:38.756 CC lib/nvme/nvme_ctrlr.o 00:05:38.756 CC lib/nvme/nvme_fabric.o 00:05:38.756 CC lib/nvme/nvme_ns.o 00:05:38.756 CC lib/nvme/nvme_qpair.o 00:05:38.756 CC lib/nvme/nvme_pcie_common.o 00:05:38.756 CC lib/nvme/nvme_pcie.o 00:05:38.756 CC lib/nvme/nvme.o 00:05:39.694 CC lib/nvme/nvme_quirks.o 00:05:39.694 CC lib/nvme/nvme_transport.o 00:05:39.694 CC lib/nvme/nvme_discovery.o 00:05:39.694 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:05:39.694 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:05:39.694 CC lib/nvme/nvme_tcp.o 00:05:39.953 CC lib/nvme/nvme_opal.o 00:05:39.953 LIB libspdk_thread.a 00:05:39.953 SO libspdk_thread.so.11.0 00:05:39.953 SYMLINK libspdk_thread.so 00:05:39.953 CC lib/nvme/nvme_io_msg.o 00:05:40.213 CC lib/nvme/nvme_poll_group.o 00:05:40.213 CC lib/nvme/nvme_zns.o 00:05:40.213 CC lib/nvme/nvme_stubs.o 00:05:40.471 CC lib/nvme/nvme_auth.o 00:05:40.471 CC lib/accel/accel.o 00:05:40.471 CC lib/blob/blobstore.o 00:05:40.731 CC lib/init/json_config.o 00:05:40.731 CC 
lib/init/subsystem.o 00:05:40.731 CC lib/init/subsystem_rpc.o 00:05:40.991 CC lib/init/rpc.o 00:05:40.991 CC lib/accel/accel_rpc.o 00:05:40.991 CC lib/blob/request.o 00:05:40.991 CC lib/accel/accel_sw.o 00:05:40.991 LIB libspdk_init.a 00:05:41.250 SO libspdk_init.so.6.0 00:05:41.250 CC lib/nvme/nvme_cuse.o 00:05:41.250 CC lib/virtio/virtio.o 00:05:41.250 SYMLINK libspdk_init.so 00:05:41.250 CC lib/virtio/virtio_vhost_user.o 00:05:41.509 CC lib/blob/zeroes.o 00:05:41.509 CC lib/blob/blob_bs_dev.o 00:05:41.509 CC lib/nvme/nvme_rdma.o 00:05:41.509 CC lib/virtio/virtio_vfio_user.o 00:05:41.509 CC lib/virtio/virtio_pci.o 00:05:41.768 CC lib/fsdev/fsdev.o 00:05:41.768 CC lib/fsdev/fsdev_io.o 00:05:41.768 CC lib/fsdev/fsdev_rpc.o 00:05:42.027 CC lib/event/app.o 00:05:42.027 CC lib/event/reactor.o 00:05:42.027 CC lib/event/log_rpc.o 00:05:42.027 LIB libspdk_virtio.a 00:05:42.027 LIB libspdk_accel.a 00:05:42.027 SO libspdk_virtio.so.7.0 00:05:42.027 SO libspdk_accel.so.16.0 00:05:42.027 SYMLINK libspdk_virtio.so 00:05:42.027 CC lib/event/app_rpc.o 00:05:42.285 CC lib/event/scheduler_static.o 00:05:42.285 SYMLINK libspdk_accel.so 00:05:42.285 CC lib/bdev/bdev.o 00:05:42.285 CC lib/bdev/bdev_rpc.o 00:05:42.285 CC lib/bdev/bdev_zone.o 00:05:42.543 CC lib/bdev/part.o 00:05:42.543 CC lib/bdev/scsi_nvme.o 00:05:42.543 LIB libspdk_event.a 00:05:42.543 SO libspdk_event.so.14.0 00:05:42.810 LIB libspdk_fsdev.a 00:05:42.810 SYMLINK libspdk_event.so 00:05:42.810 SO libspdk_fsdev.so.2.0 00:05:42.810 SYMLINK libspdk_fsdev.so 00:05:43.079 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:05:43.337 LIB libspdk_nvme.a 00:05:43.594 SO libspdk_nvme.so.15.0 00:05:43.853 SYMLINK libspdk_nvme.so 00:05:44.111 LIB libspdk_fuse_dispatcher.a 00:05:44.111 SO libspdk_fuse_dispatcher.so.1.0 00:05:44.111 SYMLINK libspdk_fuse_dispatcher.so 00:05:45.044 LIB libspdk_blob.a 00:05:45.301 SO libspdk_blob.so.12.0 00:05:45.301 SYMLINK libspdk_blob.so 00:05:45.865 CC lib/blobfs/blobfs.o 00:05:45.865 CC 
lib/blobfs/tree.o 00:05:45.865 CC lib/lvol/lvol.o 00:05:45.865 LIB libspdk_bdev.a 00:05:46.123 SO libspdk_bdev.so.17.0 00:05:46.123 SYMLINK libspdk_bdev.so 00:05:46.381 CC lib/ftl/ftl_layout.o 00:05:46.381 CC lib/ftl/ftl_debug.o 00:05:46.381 CC lib/ftl/ftl_init.o 00:05:46.381 CC lib/ftl/ftl_core.o 00:05:46.381 CC lib/nbd/nbd.o 00:05:46.381 CC lib/scsi/dev.o 00:05:46.381 CC lib/nvmf/ctrlr.o 00:05:46.381 CC lib/ublk/ublk.o 00:05:46.640 CC lib/nvmf/ctrlr_discovery.o 00:05:46.899 CC lib/ftl/ftl_io.o 00:05:46.899 CC lib/scsi/lun.o 00:05:46.899 LIB libspdk_blobfs.a 00:05:46.899 SO libspdk_blobfs.so.11.0 00:05:46.899 CC lib/scsi/port.o 00:05:46.899 CC lib/scsi/scsi.o 00:05:46.899 SYMLINK libspdk_blobfs.so 00:05:46.899 CC lib/ftl/ftl_sb.o 00:05:47.157 CC lib/scsi/scsi_bdev.o 00:05:47.157 CC lib/ftl/ftl_l2p.o 00:05:47.157 CC lib/scsi/scsi_pr.o 00:05:47.157 CC lib/ublk/ublk_rpc.o 00:05:47.157 CC lib/scsi/scsi_rpc.o 00:05:47.157 CC lib/nbd/nbd_rpc.o 00:05:47.415 CC lib/ftl/ftl_l2p_flat.o 00:05:47.415 CC lib/nvmf/ctrlr_bdev.o 00:05:47.415 CC lib/scsi/task.o 00:05:47.415 LIB libspdk_ublk.a 00:05:47.415 CC lib/nvmf/subsystem.o 00:05:47.415 LIB libspdk_lvol.a 00:05:47.415 SO libspdk_ublk.so.3.0 00:05:47.415 SO libspdk_lvol.so.11.0 00:05:47.415 LIB libspdk_nbd.a 00:05:47.415 SO libspdk_nbd.so.7.0 00:05:47.674 SYMLINK libspdk_lvol.so 00:05:47.674 CC lib/ftl/ftl_nv_cache.o 00:05:47.674 CC lib/ftl/ftl_band.o 00:05:47.674 SYMLINK libspdk_ublk.so 00:05:47.674 CC lib/ftl/ftl_band_ops.o 00:05:47.674 SYMLINK libspdk_nbd.so 00:05:47.674 CC lib/ftl/ftl_writer.o 00:05:47.674 CC lib/nvmf/nvmf.o 00:05:47.674 CC lib/nvmf/nvmf_rpc.o 00:05:47.674 LIB libspdk_scsi.a 00:05:47.933 SO libspdk_scsi.so.9.0 00:05:47.933 SYMLINK libspdk_scsi.so 00:05:47.933 CC lib/ftl/ftl_rq.o 00:05:47.933 CC lib/nvmf/transport.o 00:05:47.933 CC lib/ftl/ftl_reloc.o 00:05:48.191 CC lib/iscsi/conn.o 00:05:48.450 CC lib/vhost/vhost.o 00:05:48.450 CC lib/iscsi/init_grp.o 00:05:48.450 CC lib/ftl/ftl_l2p_cache.o 00:05:48.710 
CC lib/vhost/vhost_rpc.o 00:05:48.711 CC lib/nvmf/tcp.o 00:05:48.711 CC lib/ftl/ftl_p2l.o 00:05:48.969 CC lib/ftl/ftl_p2l_log.o 00:05:48.969 CC lib/vhost/vhost_scsi.o 00:05:48.969 CC lib/iscsi/iscsi.o 00:05:48.969 CC lib/iscsi/param.o 00:05:49.228 CC lib/nvmf/stubs.o 00:05:49.228 CC lib/nvmf/mdns_server.o 00:05:49.228 CC lib/nvmf/rdma.o 00:05:49.229 CC lib/ftl/mngt/ftl_mngt.o 00:05:49.488 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:05:49.489 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:49.489 CC lib/vhost/vhost_blk.o 00:05:49.748 CC lib/vhost/rte_vhost_user.o 00:05:49.748 CC lib/iscsi/portal_grp.o 00:05:49.748 CC lib/nvmf/auth.o 00:05:49.748 CC lib/ftl/mngt/ftl_mngt_startup.o 00:05:49.748 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:50.008 CC lib/iscsi/tgt_node.o 00:05:50.008 CC lib/ftl/mngt/ftl_mngt_misc.o 00:05:50.008 CC lib/iscsi/iscsi_subsystem.o 00:05:50.008 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:05:50.268 CC lib/iscsi/iscsi_rpc.o 00:05:50.527 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:05:50.527 CC lib/iscsi/task.o 00:05:50.527 CC lib/ftl/mngt/ftl_mngt_band.o 00:05:50.787 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:05:50.787 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:05:50.787 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:05:50.787 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:05:50.787 CC lib/ftl/utils/ftl_conf.o 00:05:50.787 LIB libspdk_iscsi.a 00:05:50.787 LIB libspdk_vhost.a 00:05:50.787 CC lib/ftl/utils/ftl_md.o 00:05:50.787 CC lib/ftl/utils/ftl_mempool.o 00:05:50.787 CC lib/ftl/utils/ftl_bitmap.o 00:05:51.046 CC lib/ftl/utils/ftl_property.o 00:05:51.046 SO libspdk_vhost.so.8.0 00:05:51.046 SO libspdk_iscsi.so.8.0 00:05:51.046 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:05:51.046 SYMLINK libspdk_vhost.so 00:05:51.046 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:05:51.046 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:05:51.047 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:05:51.047 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:05:51.047 SYMLINK libspdk_iscsi.so 00:05:51.047 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:05:51.305 CC 
lib/ftl/upgrade/ftl_trim_upgrade.o 00:05:51.305 CC lib/ftl/upgrade/ftl_sb_v3.o 00:05:51.305 CC lib/ftl/upgrade/ftl_sb_v5.o 00:05:51.305 CC lib/ftl/nvc/ftl_nvc_dev.o 00:05:51.305 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:05:51.305 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:05:51.305 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:05:51.305 CC lib/ftl/base/ftl_base_dev.o 00:05:51.305 CC lib/ftl/base/ftl_base_bdev.o 00:05:51.563 CC lib/ftl/ftl_trace.o 00:05:51.821 LIB libspdk_ftl.a 00:05:52.080 LIB libspdk_nvmf.a 00:05:52.080 SO libspdk_ftl.so.9.0 00:05:52.080 SO libspdk_nvmf.so.20.0 00:05:52.338 SYMLINK libspdk_ftl.so 00:05:52.338 SYMLINK libspdk_nvmf.so 00:05:52.910 CC module/env_dpdk/env_dpdk_rpc.o 00:05:52.910 CC module/accel/error/accel_error.o 00:05:52.910 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:05:52.910 CC module/scheduler/gscheduler/gscheduler.o 00:05:52.910 CC module/sock/posix/posix.o 00:05:52.910 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:52.910 CC module/keyring/file/keyring.o 00:05:52.910 CC module/accel/ioat/accel_ioat.o 00:05:52.910 CC module/blob/bdev/blob_bdev.o 00:05:52.910 CC module/fsdev/aio/fsdev_aio.o 00:05:52.910 LIB libspdk_env_dpdk_rpc.a 00:05:52.910 SO libspdk_env_dpdk_rpc.so.6.0 00:05:53.169 CC module/keyring/file/keyring_rpc.o 00:05:53.169 LIB libspdk_scheduler_dpdk_governor.a 00:05:53.169 SYMLINK libspdk_env_dpdk_rpc.so 00:05:53.169 CC module/accel/ioat/accel_ioat_rpc.o 00:05:53.169 SO libspdk_scheduler_dpdk_governor.so.4.0 00:05:53.169 LIB libspdk_scheduler_gscheduler.a 00:05:53.169 CC module/accel/error/accel_error_rpc.o 00:05:53.169 SO libspdk_scheduler_gscheduler.so.4.0 00:05:53.169 LIB libspdk_scheduler_dynamic.a 00:05:53.169 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:53.169 CC module/fsdev/aio/fsdev_aio_rpc.o 00:05:53.169 SO libspdk_scheduler_dynamic.so.4.0 00:05:53.169 SYMLINK libspdk_scheduler_gscheduler.so 00:05:53.169 CC module/fsdev/aio/linux_aio_mgr.o 00:05:53.169 LIB libspdk_keyring_file.a 00:05:53.169 LIB 
libspdk_accel_ioat.a 00:05:53.169 SO libspdk_keyring_file.so.2.0 00:05:53.169 SYMLINK libspdk_scheduler_dynamic.so 00:05:53.430 SO libspdk_accel_ioat.so.6.0 00:05:53.430 LIB libspdk_accel_error.a 00:05:53.430 LIB libspdk_blob_bdev.a 00:05:53.430 SYMLINK libspdk_keyring_file.so 00:05:53.430 SO libspdk_accel_error.so.2.0 00:05:53.430 SO libspdk_blob_bdev.so.12.0 00:05:53.430 CC module/keyring/linux/keyring.o 00:05:53.430 SYMLINK libspdk_accel_ioat.so 00:05:53.430 CC module/keyring/linux/keyring_rpc.o 00:05:53.430 SYMLINK libspdk_accel_error.so 00:05:53.430 SYMLINK libspdk_blob_bdev.so 00:05:53.430 CC module/accel/dsa/accel_dsa.o 00:05:53.430 CC module/accel/dsa/accel_dsa_rpc.o 00:05:53.430 LIB libspdk_keyring_linux.a 00:05:53.689 CC module/accel/iaa/accel_iaa.o 00:05:53.689 SO libspdk_keyring_linux.so.1.0 00:05:53.689 SYMLINK libspdk_keyring_linux.so 00:05:53.689 CC module/bdev/gpt/gpt.o 00:05:53.689 CC module/bdev/delay/vbdev_delay.o 00:05:53.689 CC module/blobfs/bdev/blobfs_bdev.o 00:05:53.690 CC module/bdev/error/vbdev_error.o 00:05:53.690 CC module/accel/iaa/accel_iaa_rpc.o 00:05:53.690 LIB libspdk_accel_dsa.a 00:05:53.690 LIB libspdk_fsdev_aio.a 00:05:53.948 CC module/bdev/lvol/vbdev_lvol.o 00:05:53.948 CC module/bdev/malloc/bdev_malloc.o 00:05:53.948 SO libspdk_fsdev_aio.so.1.0 00:05:53.948 SO libspdk_accel_dsa.so.5.0 00:05:53.948 LIB libspdk_sock_posix.a 00:05:53.948 CC module/bdev/gpt/vbdev_gpt.o 00:05:53.948 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:53.948 SO libspdk_sock_posix.so.6.0 00:05:53.948 LIB libspdk_accel_iaa.a 00:05:53.948 SYMLINK libspdk_accel_dsa.so 00:05:53.948 SYMLINK libspdk_fsdev_aio.so 00:05:53.948 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:53.948 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:53.948 SO libspdk_accel_iaa.so.3.0 00:05:53.948 SYMLINK libspdk_sock_posix.so 00:05:53.948 SYMLINK libspdk_accel_iaa.so 00:05:53.948 CC module/bdev/error/vbdev_error_rpc.o 00:05:53.948 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:54.242 LIB 
libspdk_blobfs_bdev.a 00:05:54.242 LIB libspdk_bdev_delay.a 00:05:54.242 SO libspdk_blobfs_bdev.so.6.0 00:05:54.242 SO libspdk_bdev_delay.so.6.0 00:05:54.242 LIB libspdk_bdev_error.a 00:05:54.242 LIB libspdk_bdev_gpt.a 00:05:54.242 SYMLINK libspdk_blobfs_bdev.so 00:05:54.242 LIB libspdk_bdev_malloc.a 00:05:54.242 CC module/bdev/null/bdev_null.o 00:05:54.242 CC module/bdev/null/bdev_null_rpc.o 00:05:54.242 SYMLINK libspdk_bdev_delay.so 00:05:54.242 SO libspdk_bdev_error.so.6.0 00:05:54.242 SO libspdk_bdev_gpt.so.6.0 00:05:54.242 SO libspdk_bdev_malloc.so.6.0 00:05:54.242 CC module/bdev/passthru/vbdev_passthru.o 00:05:54.242 CC module/bdev/nvme/bdev_nvme.o 00:05:54.242 SYMLINK libspdk_bdev_error.so 00:05:54.500 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:54.500 SYMLINK libspdk_bdev_malloc.so 00:05:54.500 SYMLINK libspdk_bdev_gpt.so 00:05:54.500 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:54.500 CC module/bdev/raid/bdev_raid.o 00:05:54.500 CC module/bdev/nvme/nvme_rpc.o 00:05:54.500 CC module/bdev/split/vbdev_split.o 00:05:54.500 CC module/bdev/split/vbdev_split_rpc.o 00:05:54.500 LIB libspdk_bdev_lvol.a 00:05:54.500 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:54.759 LIB libspdk_bdev_null.a 00:05:54.759 SO libspdk_bdev_lvol.so.6.0 00:05:54.759 SO libspdk_bdev_null.so.6.0 00:05:54.759 LIB libspdk_bdev_passthru.a 00:05:54.759 SO libspdk_bdev_passthru.so.6.0 00:05:54.759 SYMLINK libspdk_bdev_lvol.so 00:05:54.759 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:54.759 SYMLINK libspdk_bdev_null.so 00:05:54.759 CC module/bdev/nvme/bdev_mdns_client.o 00:05:54.759 CC module/bdev/raid/bdev_raid_rpc.o 00:05:54.759 CC module/bdev/nvme/vbdev_opal.o 00:05:54.759 LIB libspdk_bdev_split.a 00:05:54.759 SYMLINK libspdk_bdev_passthru.so 00:05:54.759 SO libspdk_bdev_split.so.6.0 00:05:55.019 SYMLINK libspdk_bdev_split.so 00:05:55.019 CC module/bdev/aio/bdev_aio.o 00:05:55.019 LIB libspdk_bdev_zone_block.a 00:05:55.019 SO libspdk_bdev_zone_block.so.6.0 00:05:55.019 CC 
module/bdev/ftl/bdev_ftl.o 00:05:55.019 CC module/bdev/raid/bdev_raid_sb.o 00:05:55.019 CC module/bdev/iscsi/bdev_iscsi.o 00:05:55.019 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:55.019 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:55.019 SYMLINK libspdk_bdev_zone_block.so 00:05:55.019 CC module/bdev/raid/raid0.o 00:05:55.019 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:55.288 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:55.288 CC module/bdev/aio/bdev_aio_rpc.o 00:05:55.288 CC module/bdev/raid/raid1.o 00:05:55.288 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:55.288 CC module/bdev/raid/concat.o 00:05:55.547 LIB libspdk_bdev_ftl.a 00:05:55.547 SO libspdk_bdev_ftl.so.6.0 00:05:55.547 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:55.547 LIB libspdk_bdev_iscsi.a 00:05:55.547 LIB libspdk_bdev_aio.a 00:05:55.547 SO libspdk_bdev_iscsi.so.6.0 00:05:55.547 SYMLINK libspdk_bdev_ftl.so 00:05:55.547 SO libspdk_bdev_aio.so.6.0 00:05:55.547 CC module/bdev/raid/raid5f.o 00:05:55.547 SYMLINK libspdk_bdev_iscsi.so 00:05:55.547 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:55.547 SYMLINK libspdk_bdev_aio.so 00:05:55.807 LIB libspdk_bdev_virtio.a 00:05:55.807 SO libspdk_bdev_virtio.so.6.0 00:05:55.807 SYMLINK libspdk_bdev_virtio.so 00:05:56.067 LIB libspdk_bdev_raid.a 00:05:56.327 SO libspdk_bdev_raid.so.6.0 00:05:56.327 SYMLINK libspdk_bdev_raid.so 00:05:57.708 LIB libspdk_bdev_nvme.a 00:05:57.709 SO libspdk_bdev_nvme.so.7.1 00:05:57.709 SYMLINK libspdk_bdev_nvme.so 00:05:58.276 CC module/event/subsystems/keyring/keyring.o 00:05:58.276 CC module/event/subsystems/scheduler/scheduler.o 00:05:58.276 CC module/event/subsystems/fsdev/fsdev.o 00:05:58.276 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:58.276 CC module/event/subsystems/sock/sock.o 00:05:58.276 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:58.276 CC module/event/subsystems/iobuf/iobuf.o 00:05:58.276 CC module/event/subsystems/vmd/vmd.o 00:05:58.276 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:58.535 LIB 
libspdk_event_vhost_blk.a 00:05:58.535 LIB libspdk_event_keyring.a 00:05:58.535 LIB libspdk_event_scheduler.a 00:05:58.535 LIB libspdk_event_sock.a 00:05:58.535 SO libspdk_event_vhost_blk.so.3.0 00:05:58.535 LIB libspdk_event_fsdev.a 00:05:58.535 LIB libspdk_event_vmd.a 00:05:58.535 SO libspdk_event_keyring.so.1.0 00:05:58.535 SO libspdk_event_scheduler.so.4.0 00:05:58.535 SO libspdk_event_sock.so.5.0 00:05:58.535 SO libspdk_event_fsdev.so.1.0 00:05:58.535 SO libspdk_event_vmd.so.6.0 00:05:58.535 LIB libspdk_event_iobuf.a 00:05:58.535 SYMLINK libspdk_event_vhost_blk.so 00:05:58.535 SYMLINK libspdk_event_keyring.so 00:05:58.535 SYMLINK libspdk_event_scheduler.so 00:05:58.535 SO libspdk_event_iobuf.so.3.0 00:05:58.535 SYMLINK libspdk_event_fsdev.so 00:05:58.535 SYMLINK libspdk_event_sock.so 00:05:58.535 SYMLINK libspdk_event_vmd.so 00:05:58.535 SYMLINK libspdk_event_iobuf.so 00:05:59.103 CC module/event/subsystems/accel/accel.o 00:05:59.103 LIB libspdk_event_accel.a 00:05:59.363 SO libspdk_event_accel.so.6.0 00:05:59.363 SYMLINK libspdk_event_accel.so 00:05:59.623 CC module/event/subsystems/bdev/bdev.o 00:05:59.881 LIB libspdk_event_bdev.a 00:05:59.882 SO libspdk_event_bdev.so.6.0 00:06:00.140 SYMLINK libspdk_event_bdev.so 00:06:00.399 CC module/event/subsystems/nbd/nbd.o 00:06:00.399 CC module/event/subsystems/ublk/ublk.o 00:06:00.399 CC module/event/subsystems/scsi/scsi.o 00:06:00.399 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:06:00.399 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:06:00.399 LIB libspdk_event_nbd.a 00:06:00.697 LIB libspdk_event_ublk.a 00:06:00.697 SO libspdk_event_nbd.so.6.0 00:06:00.697 SO libspdk_event_ublk.so.3.0 00:06:00.697 SYMLINK libspdk_event_nbd.so 00:06:00.697 LIB libspdk_event_scsi.a 00:06:00.697 SYMLINK libspdk_event_ublk.so 00:06:00.697 SO libspdk_event_scsi.so.6.0 00:06:00.697 SYMLINK libspdk_event_scsi.so 00:06:00.697 LIB libspdk_event_nvmf.a 00:06:00.959 SO libspdk_event_nvmf.so.6.0 00:06:00.959 SYMLINK libspdk_event_nvmf.so 
00:06:00.959 CC module/event/subsystems/iscsi/iscsi.o 00:06:01.219 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:06:01.219 LIB libspdk_event_iscsi.a 00:06:01.219 LIB libspdk_event_vhost_scsi.a 00:06:01.219 SO libspdk_event_iscsi.so.6.0 00:06:01.477 SO libspdk_event_vhost_scsi.so.3.0 00:06:01.477 SYMLINK libspdk_event_iscsi.so 00:06:01.477 SYMLINK libspdk_event_vhost_scsi.so 00:06:01.736 SO libspdk.so.6.0 00:06:01.736 SYMLINK libspdk.so 00:06:01.996 CC app/trace_record/trace_record.o 00:06:01.996 CXX app/trace/trace.o 00:06:01.996 CC app/spdk_lspci/spdk_lspci.o 00:06:01.996 CC app/iscsi_tgt/iscsi_tgt.o 00:06:01.996 CC app/nvmf_tgt/nvmf_main.o 00:06:01.996 CC examples/interrupt_tgt/interrupt_tgt.o 00:06:01.996 CC app/spdk_tgt/spdk_tgt.o 00:06:01.996 CC examples/util/zipf/zipf.o 00:06:01.996 CC examples/ioat/perf/perf.o 00:06:01.996 CC test/thread/poller_perf/poller_perf.o 00:06:01.996 LINK spdk_lspci 00:06:02.255 LINK nvmf_tgt 00:06:02.255 LINK poller_perf 00:06:02.255 LINK zipf 00:06:02.255 LINK iscsi_tgt 00:06:02.255 LINK interrupt_tgt 00:06:02.255 LINK spdk_trace_record 00:06:02.255 LINK ioat_perf 00:06:02.513 LINK spdk_trace 00:06:02.513 LINK spdk_tgt 00:06:02.513 TEST_HEADER include/spdk/accel.h 00:06:02.513 TEST_HEADER include/spdk/accel_module.h 00:06:02.513 TEST_HEADER include/spdk/assert.h 00:06:02.513 CC examples/ioat/verify/verify.o 00:06:02.513 TEST_HEADER include/spdk/barrier.h 00:06:02.771 TEST_HEADER include/spdk/base64.h 00:06:02.772 TEST_HEADER include/spdk/bdev.h 00:06:02.772 TEST_HEADER include/spdk/bdev_module.h 00:06:02.772 TEST_HEADER include/spdk/bdev_zone.h 00:06:02.772 TEST_HEADER include/spdk/bit_array.h 00:06:02.772 TEST_HEADER include/spdk/bit_pool.h 00:06:02.772 TEST_HEADER include/spdk/blob_bdev.h 00:06:02.772 TEST_HEADER include/spdk/blobfs_bdev.h 00:06:02.772 TEST_HEADER include/spdk/blobfs.h 00:06:02.772 TEST_HEADER include/spdk/blob.h 00:06:02.772 TEST_HEADER include/spdk/conf.h 00:06:02.772 TEST_HEADER include/spdk/config.h 
00:06:02.772 CC app/spdk_nvme_perf/perf.o 00:06:02.772 TEST_HEADER include/spdk/cpuset.h 00:06:02.772 CC test/dma/test_dma/test_dma.o 00:06:02.772 TEST_HEADER include/spdk/crc16.h 00:06:02.772 TEST_HEADER include/spdk/crc32.h 00:06:02.772 TEST_HEADER include/spdk/crc64.h 00:06:02.772 TEST_HEADER include/spdk/dif.h 00:06:02.772 TEST_HEADER include/spdk/dma.h 00:06:02.772 CC test/app/bdev_svc/bdev_svc.o 00:06:02.772 TEST_HEADER include/spdk/endian.h 00:06:02.772 TEST_HEADER include/spdk/env_dpdk.h 00:06:02.772 TEST_HEADER include/spdk/env.h 00:06:02.772 TEST_HEADER include/spdk/event.h 00:06:02.772 TEST_HEADER include/spdk/fd_group.h 00:06:02.772 CC app/spdk_nvme_identify/identify.o 00:06:02.772 TEST_HEADER include/spdk/fd.h 00:06:02.772 TEST_HEADER include/spdk/file.h 00:06:02.772 TEST_HEADER include/spdk/fsdev.h 00:06:02.772 TEST_HEADER include/spdk/fsdev_module.h 00:06:02.772 TEST_HEADER include/spdk/ftl.h 00:06:02.772 TEST_HEADER include/spdk/fuse_dispatcher.h 00:06:02.772 TEST_HEADER include/spdk/gpt_spec.h 00:06:02.772 TEST_HEADER include/spdk/hexlify.h 00:06:02.772 TEST_HEADER include/spdk/histogram_data.h 00:06:02.772 CC app/spdk_nvme_discover/discovery_aer.o 00:06:02.772 TEST_HEADER include/spdk/idxd.h 00:06:02.772 TEST_HEADER include/spdk/idxd_spec.h 00:06:02.772 TEST_HEADER include/spdk/init.h 00:06:02.772 TEST_HEADER include/spdk/ioat.h 00:06:02.772 TEST_HEADER include/spdk/ioat_spec.h 00:06:02.772 TEST_HEADER include/spdk/iscsi_spec.h 00:06:02.772 TEST_HEADER include/spdk/json.h 00:06:02.772 TEST_HEADER include/spdk/jsonrpc.h 00:06:02.772 TEST_HEADER include/spdk/keyring.h 00:06:02.772 TEST_HEADER include/spdk/keyring_module.h 00:06:02.772 TEST_HEADER include/spdk/likely.h 00:06:02.772 TEST_HEADER include/spdk/log.h 00:06:02.772 TEST_HEADER include/spdk/lvol.h 00:06:02.772 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:06:02.772 TEST_HEADER include/spdk/md5.h 00:06:02.772 TEST_HEADER include/spdk/memory.h 00:06:02.772 TEST_HEADER include/spdk/mmio.h 
00:06:02.772 TEST_HEADER include/spdk/nbd.h 00:06:02.772 TEST_HEADER include/spdk/net.h 00:06:02.772 TEST_HEADER include/spdk/notify.h 00:06:02.772 TEST_HEADER include/spdk/nvme.h 00:06:02.772 TEST_HEADER include/spdk/nvme_intel.h 00:06:02.772 TEST_HEADER include/spdk/nvme_ocssd.h 00:06:02.772 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:06:02.772 TEST_HEADER include/spdk/nvme_spec.h 00:06:02.772 TEST_HEADER include/spdk/nvme_zns.h 00:06:02.772 TEST_HEADER include/spdk/nvmf_cmd.h 00:06:02.772 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:06:02.772 TEST_HEADER include/spdk/nvmf.h 00:06:02.772 TEST_HEADER include/spdk/nvmf_spec.h 00:06:02.772 TEST_HEADER include/spdk/nvmf_transport.h 00:06:02.772 TEST_HEADER include/spdk/opal.h 00:06:02.772 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:06:02.772 TEST_HEADER include/spdk/opal_spec.h 00:06:02.772 TEST_HEADER include/spdk/pci_ids.h 00:06:02.772 TEST_HEADER include/spdk/pipe.h 00:06:02.772 TEST_HEADER include/spdk/queue.h 00:06:02.772 TEST_HEADER include/spdk/reduce.h 00:06:02.772 TEST_HEADER include/spdk/rpc.h 00:06:02.772 TEST_HEADER include/spdk/scheduler.h 00:06:02.772 TEST_HEADER include/spdk/scsi.h 00:06:02.772 TEST_HEADER include/spdk/scsi_spec.h 00:06:02.772 TEST_HEADER include/spdk/sock.h 00:06:02.772 TEST_HEADER include/spdk/stdinc.h 00:06:02.772 TEST_HEADER include/spdk/string.h 00:06:02.772 TEST_HEADER include/spdk/thread.h 00:06:02.772 TEST_HEADER include/spdk/trace.h 00:06:02.772 TEST_HEADER include/spdk/trace_parser.h 00:06:02.772 TEST_HEADER include/spdk/tree.h 00:06:02.772 TEST_HEADER include/spdk/ublk.h 00:06:02.772 TEST_HEADER include/spdk/util.h 00:06:02.772 TEST_HEADER include/spdk/uuid.h 00:06:02.772 TEST_HEADER include/spdk/version.h 00:06:02.772 CC test/env/mem_callbacks/mem_callbacks.o 00:06:02.772 TEST_HEADER include/spdk/vfio_user_pci.h 00:06:02.772 TEST_HEADER include/spdk/vfio_user_spec.h 00:06:02.772 TEST_HEADER include/spdk/vhost.h 00:06:02.772 TEST_HEADER include/spdk/vmd.h 00:06:02.772 
TEST_HEADER include/spdk/xor.h 00:06:02.772 TEST_HEADER include/spdk/zipf.h 00:06:02.772 CXX test/cpp_headers/accel.o 00:06:02.772 LINK bdev_svc 00:06:03.031 LINK verify 00:06:03.031 CXX test/cpp_headers/accel_module.o 00:06:03.031 LINK spdk_nvme_discover 00:06:03.288 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:06:03.288 CXX test/cpp_headers/assert.o 00:06:03.288 LINK test_dma 00:06:03.288 CC examples/thread/thread/thread_ex.o 00:06:03.288 LINK nvme_fuzz 00:06:03.288 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:06:03.288 CXX test/cpp_headers/barrier.o 00:06:03.546 LINK mem_callbacks 00:06:03.546 CXX test/cpp_headers/base64.o 00:06:03.546 CC test/event/event_perf/event_perf.o 00:06:03.805 LINK thread 00:06:03.805 CC test/event/reactor/reactor.o 00:06:03.805 CXX test/cpp_headers/bdev.o 00:06:03.805 LINK event_perf 00:06:03.805 CC test/env/vtophys/vtophys.o 00:06:03.805 CXX test/cpp_headers/bdev_module.o 00:06:03.805 LINK vhost_fuzz 00:06:04.064 CXX test/cpp_headers/bdev_zone.o 00:06:04.064 LINK reactor 00:06:04.064 LINK vtophys 00:06:04.064 LINK spdk_nvme_perf 00:06:04.064 CC app/spdk_top/spdk_top.o 00:06:04.064 CC test/event/reactor_perf/reactor_perf.o 00:06:04.064 CXX test/cpp_headers/bit_array.o 00:06:04.064 CC examples/sock/hello_world/hello_sock.o 00:06:04.321 CC test/event/app_repeat/app_repeat.o 00:06:04.321 CXX test/cpp_headers/bit_pool.o 00:06:04.321 LINK spdk_nvme_identify 00:06:04.321 LINK reactor_perf 00:06:04.321 CC test/event/scheduler/scheduler.o 00:06:04.321 LINK app_repeat 00:06:04.321 CXX test/cpp_headers/blob_bdev.o 00:06:04.321 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:06:04.579 CC app/vhost/vhost.o 00:06:04.579 CXX test/cpp_headers/blobfs_bdev.o 00:06:04.579 LINK hello_sock 00:06:04.579 LINK scheduler 00:06:04.579 LINK env_dpdk_post_init 00:06:04.579 CC app/spdk_dd/spdk_dd.o 00:06:04.579 CXX test/cpp_headers/blobfs.o 00:06:04.838 LINK vhost 00:06:04.838 CC examples/vmd/lsvmd/lsvmd.o 00:06:04.838 CC examples/vmd/led/led.o 
00:06:04.838 CC test/env/memory/memory_ut.o 00:06:04.838 CXX test/cpp_headers/blob.o 00:06:05.097 CC test/env/pci/pci_ut.o 00:06:05.097 LINK lsvmd 00:06:05.097 CC test/app/histogram_perf/histogram_perf.o 00:06:05.097 LINK led 00:06:05.097 CXX test/cpp_headers/conf.o 00:06:05.097 LINK spdk_dd 00:06:05.097 CC app/fio/nvme/fio_plugin.o 00:06:05.097 LINK histogram_perf 00:06:05.097 LINK spdk_top 00:06:05.097 LINK iscsi_fuzz 00:06:05.355 CXX test/cpp_headers/config.o 00:06:05.355 CXX test/cpp_headers/cpuset.o 00:06:05.355 CC app/fio/bdev/fio_plugin.o 00:06:05.355 CC examples/idxd/perf/perf.o 00:06:05.355 LINK pci_ut 00:06:05.613 CXX test/cpp_headers/crc16.o 00:06:05.613 CC test/app/jsoncat/jsoncat.o 00:06:05.613 CC examples/accel/perf/accel_perf.o 00:06:05.613 CC examples/fsdev/hello_world/hello_fsdev.o 00:06:05.613 CC examples/blob/hello_world/hello_blob.o 00:06:05.613 CXX test/cpp_headers/crc32.o 00:06:05.613 LINK jsoncat 00:06:05.871 CXX test/cpp_headers/crc64.o 00:06:05.871 LINK idxd_perf 00:06:05.871 LINK spdk_nvme 00:06:05.871 LINK hello_fsdev 00:06:05.871 LINK hello_blob 00:06:05.871 LINK spdk_bdev 00:06:05.871 CC test/rpc_client/rpc_client_test.o 00:06:05.871 CXX test/cpp_headers/dif.o 00:06:05.871 CC test/app/stub/stub.o 00:06:06.130 CXX test/cpp_headers/dma.o 00:06:06.130 CXX test/cpp_headers/endian.o 00:06:06.130 CXX test/cpp_headers/env_dpdk.o 00:06:06.130 LINK rpc_client_test 00:06:06.130 LINK stub 00:06:06.130 CC test/accel/dif/dif.o 00:06:06.130 LINK accel_perf 00:06:06.130 CXX test/cpp_headers/env.o 00:06:06.130 LINK memory_ut 00:06:06.388 CC examples/blob/cli/blobcli.o 00:06:06.388 CC examples/nvme/hello_world/hello_world.o 00:06:06.388 CXX test/cpp_headers/event.o 00:06:06.388 CXX test/cpp_headers/fd_group.o 00:06:06.388 CC test/blobfs/mkfs/mkfs.o 00:06:06.648 CC examples/nvme/reconnect/reconnect.o 00:06:06.648 CC examples/nvme/nvme_manage/nvme_manage.o 00:06:06.648 CXX test/cpp_headers/fd.o 00:06:06.648 CC test/lvol/esnap/esnap.o 00:06:06.648 CC 
test/nvme/aer/aer.o 00:06:06.648 LINK hello_world 00:06:06.648 CC examples/nvme/arbitration/arbitration.o 00:06:06.648 LINK mkfs 00:06:06.648 CXX test/cpp_headers/file.o 00:06:06.907 LINK blobcli 00:06:06.907 CC test/nvme/reset/reset.o 00:06:06.907 CXX test/cpp_headers/fsdev.o 00:06:06.907 LINK aer 00:06:06.907 LINK reconnect 00:06:06.907 LINK dif 00:06:06.907 CC test/nvme/sgl/sgl.o 00:06:07.166 LINK arbitration 00:06:07.166 CXX test/cpp_headers/fsdev_module.o 00:06:07.166 CXX test/cpp_headers/ftl.o 00:06:07.166 LINK nvme_manage 00:06:07.166 LINK reset 00:06:07.166 CC test/nvme/e2edp/nvme_dp.o 00:06:07.424 CC examples/nvme/hotplug/hotplug.o 00:06:07.424 CXX test/cpp_headers/fuse_dispatcher.o 00:06:07.424 LINK sgl 00:06:07.424 CC examples/nvme/cmb_copy/cmb_copy.o 00:06:07.424 CC test/nvme/overhead/overhead.o 00:06:07.424 CXX test/cpp_headers/gpt_spec.o 00:06:07.424 CC examples/bdev/hello_world/hello_bdev.o 00:06:07.683 CC examples/bdev/bdevperf/bdevperf.o 00:06:07.683 LINK hotplug 00:06:07.683 LINK cmb_copy 00:06:07.683 CC examples/nvme/abort/abort.o 00:06:07.683 CXX test/cpp_headers/hexlify.o 00:06:07.683 CC test/nvme/err_injection/err_injection.o 00:06:07.683 LINK nvme_dp 00:06:07.683 LINK hello_bdev 00:06:07.942 LINK overhead 00:06:07.942 CXX test/cpp_headers/histogram_data.o 00:06:07.942 CXX test/cpp_headers/idxd.o 00:06:07.942 LINK err_injection 00:06:07.942 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:06:07.942 CC test/nvme/startup/startup.o 00:06:07.942 CXX test/cpp_headers/idxd_spec.o 00:06:08.200 CXX test/cpp_headers/init.o 00:06:08.200 CXX test/cpp_headers/ioat.o 00:06:08.200 CC test/nvme/reserve/reserve.o 00:06:08.200 LINK abort 00:06:08.200 LINK pmr_persistence 00:06:08.200 CC test/nvme/simple_copy/simple_copy.o 00:06:08.200 LINK startup 00:06:08.200 CXX test/cpp_headers/ioat_spec.o 00:06:08.510 CC test/nvme/connect_stress/connect_stress.o 00:06:08.510 CC test/nvme/boot_partition/boot_partition.o 00:06:08.510 CXX test/cpp_headers/iscsi_spec.o 
00:06:08.510 CXX test/cpp_headers/json.o 00:06:08.510 LINK reserve 00:06:08.510 LINK simple_copy 00:06:08.510 LINK connect_stress 00:06:08.510 LINK bdevperf 00:06:08.510 CC test/nvme/compliance/nvme_compliance.o 00:06:08.510 CXX test/cpp_headers/jsonrpc.o 00:06:08.510 LINK boot_partition 00:06:08.787 CC test/nvme/fused_ordering/fused_ordering.o 00:06:08.787 CC test/bdev/bdevio/bdevio.o 00:06:08.787 CC test/nvme/doorbell_aers/doorbell_aers.o 00:06:08.787 CXX test/cpp_headers/keyring.o 00:06:08.787 CXX test/cpp_headers/keyring_module.o 00:06:08.787 CC test/nvme/cuse/cuse.o 00:06:08.787 CC test/nvme/fdp/fdp.o 00:06:08.787 LINK fused_ordering 00:06:09.047 LINK nvme_compliance 00:06:09.047 CXX test/cpp_headers/likely.o 00:06:09.047 LINK doorbell_aers 00:06:09.047 CXX test/cpp_headers/log.o 00:06:09.047 CC examples/nvmf/nvmf/nvmf.o 00:06:09.047 CXX test/cpp_headers/lvol.o 00:06:09.047 CXX test/cpp_headers/md5.o 00:06:09.307 LINK bdevio 00:06:09.307 CXX test/cpp_headers/memory.o 00:06:09.307 CXX test/cpp_headers/mmio.o 00:06:09.307 LINK fdp 00:06:09.307 CXX test/cpp_headers/nbd.o 00:06:09.307 CXX test/cpp_headers/net.o 00:06:09.307 CXX test/cpp_headers/notify.o 00:06:09.307 CXX test/cpp_headers/nvme.o 00:06:09.307 CXX test/cpp_headers/nvme_intel.o 00:06:09.566 CXX test/cpp_headers/nvme_ocssd.o 00:06:09.566 CXX test/cpp_headers/nvme_ocssd_spec.o 00:06:09.566 LINK nvmf 00:06:09.566 CXX test/cpp_headers/nvme_spec.o 00:06:09.566 CXX test/cpp_headers/nvme_zns.o 00:06:09.566 CXX test/cpp_headers/nvmf_cmd.o 00:06:09.566 CXX test/cpp_headers/nvmf_fc_spec.o 00:06:09.566 CXX test/cpp_headers/nvmf.o 00:06:09.566 CXX test/cpp_headers/nvmf_spec.o 00:06:09.566 CXX test/cpp_headers/nvmf_transport.o 00:06:09.566 CXX test/cpp_headers/opal.o 00:06:09.826 CXX test/cpp_headers/opal_spec.o 00:06:09.826 CXX test/cpp_headers/pci_ids.o 00:06:09.826 CXX test/cpp_headers/pipe.o 00:06:09.826 CXX test/cpp_headers/queue.o 00:06:09.826 CXX test/cpp_headers/reduce.o 00:06:09.826 CXX 
test/cpp_headers/rpc.o 00:06:09.826 CXX test/cpp_headers/scheduler.o 00:06:09.826 CXX test/cpp_headers/scsi.o 00:06:09.826 CXX test/cpp_headers/scsi_spec.o 00:06:09.826 CXX test/cpp_headers/sock.o 00:06:09.826 CXX test/cpp_headers/stdinc.o 00:06:09.826 CXX test/cpp_headers/string.o 00:06:10.084 CXX test/cpp_headers/thread.o 00:06:10.084 CXX test/cpp_headers/trace.o 00:06:10.084 CXX test/cpp_headers/trace_parser.o 00:06:10.084 CXX test/cpp_headers/tree.o 00:06:10.084 CXX test/cpp_headers/ublk.o 00:06:10.084 CXX test/cpp_headers/util.o 00:06:10.084 CXX test/cpp_headers/uuid.o 00:06:10.084 CXX test/cpp_headers/version.o 00:06:10.084 CXX test/cpp_headers/vfio_user_pci.o 00:06:10.084 CXX test/cpp_headers/vfio_user_spec.o 00:06:10.084 CXX test/cpp_headers/vhost.o 00:06:10.084 CXX test/cpp_headers/vmd.o 00:06:10.341 CXX test/cpp_headers/xor.o 00:06:10.341 CXX test/cpp_headers/zipf.o 00:06:10.600 LINK cuse 00:06:13.951 LINK esnap 00:06:14.210 00:06:14.210 real 1m48.358s 00:06:14.210 user 9m21.198s 00:06:14.210 sys 1m56.679s 00:06:14.210 06:15:58 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:06:14.210 06:15:58 make -- common/autotest_common.sh@10 -- $ set +x 00:06:14.210 ************************************ 00:06:14.210 END TEST make 00:06:14.210 ************************************ 00:06:14.210 06:15:58 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:06:14.210 06:15:58 -- pm/common@29 -- $ signal_monitor_resources TERM 00:06:14.210 06:15:58 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:06:14.210 06:15:58 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:14.210 06:15:58 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:06:14.210 06:15:58 -- pm/common@44 -- $ pid=5466 00:06:14.210 06:15:58 -- pm/common@50 -- $ kill -TERM 5466 00:06:14.210 06:15:58 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:14.210 06:15:58 -- pm/common@43 -- $ [[ -e 
/home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:06:14.210 06:15:58 -- pm/common@44 -- $ pid=5467 00:06:14.210 06:15:58 -- pm/common@50 -- $ kill -TERM 5467 00:06:14.210 06:15:58 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:06:14.210 06:15:58 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:06:14.211 06:15:58 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:14.211 06:15:58 -- common/autotest_common.sh@1693 -- # lcov --version 00:06:14.211 06:15:58 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:14.211 06:15:58 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:14.211 06:15:58 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:14.211 06:15:58 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:14.211 06:15:58 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:14.211 06:15:58 -- scripts/common.sh@336 -- # IFS=.-: 00:06:14.211 06:15:58 -- scripts/common.sh@336 -- # read -ra ver1 00:06:14.211 06:15:58 -- scripts/common.sh@337 -- # IFS=.-: 00:06:14.211 06:15:58 -- scripts/common.sh@337 -- # read -ra ver2 00:06:14.211 06:15:58 -- scripts/common.sh@338 -- # local 'op=<' 00:06:14.211 06:15:58 -- scripts/common.sh@340 -- # ver1_l=2 00:06:14.211 06:15:58 -- scripts/common.sh@341 -- # ver2_l=1 00:06:14.211 06:15:58 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:14.211 06:15:58 -- scripts/common.sh@344 -- # case "$op" in 00:06:14.211 06:15:58 -- scripts/common.sh@345 -- # : 1 00:06:14.211 06:15:58 -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:14.211 06:15:58 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:14.211 06:15:58 -- scripts/common.sh@365 -- # decimal 1 00:06:14.211 06:15:58 -- scripts/common.sh@353 -- # local d=1 00:06:14.211 06:15:58 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:14.211 06:15:58 -- scripts/common.sh@355 -- # echo 1 00:06:14.211 06:15:58 -- scripts/common.sh@365 -- # ver1[v]=1 00:06:14.211 06:15:58 -- scripts/common.sh@366 -- # decimal 2 00:06:14.211 06:15:58 -- scripts/common.sh@353 -- # local d=2 00:06:14.211 06:15:58 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:14.211 06:15:58 -- scripts/common.sh@355 -- # echo 2 00:06:14.211 06:15:58 -- scripts/common.sh@366 -- # ver2[v]=2 00:06:14.211 06:15:58 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:14.211 06:15:58 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:14.211 06:15:58 -- scripts/common.sh@368 -- # return 0 00:06:14.211 06:15:58 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:14.211 06:15:58 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:14.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.211 --rc genhtml_branch_coverage=1 00:06:14.211 --rc genhtml_function_coverage=1 00:06:14.211 --rc genhtml_legend=1 00:06:14.211 --rc geninfo_all_blocks=1 00:06:14.211 --rc geninfo_unexecuted_blocks=1 00:06:14.211 00:06:14.211 ' 00:06:14.211 06:15:58 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:14.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.211 --rc genhtml_branch_coverage=1 00:06:14.211 --rc genhtml_function_coverage=1 00:06:14.211 --rc genhtml_legend=1 00:06:14.211 --rc geninfo_all_blocks=1 00:06:14.211 --rc geninfo_unexecuted_blocks=1 00:06:14.211 00:06:14.211 ' 00:06:14.211 06:15:58 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:14.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.211 --rc genhtml_branch_coverage=1 00:06:14.211 --rc 
genhtml_function_coverage=1 00:06:14.211 --rc genhtml_legend=1 00:06:14.211 --rc geninfo_all_blocks=1 00:06:14.211 --rc geninfo_unexecuted_blocks=1 00:06:14.211 00:06:14.211 ' 00:06:14.211 06:15:58 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:14.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.211 --rc genhtml_branch_coverage=1 00:06:14.211 --rc genhtml_function_coverage=1 00:06:14.211 --rc genhtml_legend=1 00:06:14.211 --rc geninfo_all_blocks=1 00:06:14.211 --rc geninfo_unexecuted_blocks=1 00:06:14.211 00:06:14.211 ' 00:06:14.211 06:15:58 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:14.211 06:15:58 -- nvmf/common.sh@7 -- # uname -s 00:06:14.211 06:15:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:14.211 06:15:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:14.211 06:15:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:14.211 06:15:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:14.211 06:15:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:14.211 06:15:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:14.211 06:15:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:14.211 06:15:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:14.211 06:15:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:14.470 06:15:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:14.470 06:15:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6f76d946-b2f9-4be1-9537-21eaa0074f60 00:06:14.470 06:15:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=6f76d946-b2f9-4be1-9537-21eaa0074f60 00:06:14.470 06:15:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:14.470 06:15:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:14.470 06:15:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:14.470 06:15:58 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:06:14.470 06:15:58 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:14.470 06:15:58 -- scripts/common.sh@15 -- # shopt -s extglob 00:06:14.470 06:15:58 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:14.470 06:15:58 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:14.470 06:15:58 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:14.470 06:15:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.470 06:15:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.470 06:15:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.470 06:15:58 -- paths/export.sh@5 -- # export PATH 00:06:14.470 06:15:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.470 06:15:58 -- nvmf/common.sh@51 -- # : 0 00:06:14.470 06:15:58 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:14.470 06:15:58 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:14.470 06:15:58 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:06:14.470 06:15:58 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:14.470 06:15:58 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:14.470 06:15:58 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:14.470 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:14.470 06:15:58 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:14.470 06:15:58 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:14.470 06:15:58 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:14.470 06:15:58 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:06:14.470 06:15:58 -- spdk/autotest.sh@32 -- # uname -s 00:06:14.470 06:15:58 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:06:14.470 06:15:58 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:06:14.470 06:15:58 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:06:14.470 06:15:58 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:06:14.470 06:15:58 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:06:14.470 06:15:58 -- spdk/autotest.sh@44 -- # modprobe nbd 00:06:14.470 06:15:58 -- spdk/autotest.sh@46 -- # type -P udevadm 00:06:14.470 06:15:58 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:06:14.470 06:15:58 -- spdk/autotest.sh@48 -- # udevadm_pid=54701 00:06:14.470 06:15:58 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:06:14.470 06:15:58 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:06:14.470 06:15:58 -- pm/common@17 -- # local monitor 00:06:14.470 06:15:58 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:14.470 06:15:58 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:14.470 06:15:58 -- pm/common@21 -- # date +%s 00:06:14.470 06:15:58 -- pm/common@25 -- # sleep 1 00:06:14.470 06:15:58 -- 
pm/common@21 -- # date +%s 00:06:14.470 06:15:58 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732601758 00:06:14.470 06:15:58 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732601758 00:06:14.470 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732601758_collect-vmstat.pm.log 00:06:14.470 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732601758_collect-cpu-load.pm.log 00:06:15.405 06:15:59 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:06:15.405 06:15:59 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:06:15.405 06:15:59 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:15.405 06:15:59 -- common/autotest_common.sh@10 -- # set +x 00:06:15.405 06:15:59 -- spdk/autotest.sh@59 -- # create_test_list 00:06:15.405 06:15:59 -- common/autotest_common.sh@752 -- # xtrace_disable 00:06:15.405 06:15:59 -- common/autotest_common.sh@10 -- # set +x 00:06:15.405 06:15:59 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:06:15.405 06:15:59 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:06:15.405 06:15:59 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:06:15.405 06:15:59 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:06:15.405 06:15:59 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:06:15.405 06:15:59 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:06:15.405 06:15:59 -- common/autotest_common.sh@1457 -- # uname 00:06:15.405 06:15:59 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:06:15.405 06:15:59 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:06:15.405 06:15:59 -- common/autotest_common.sh@1477 -- 
# uname 00:06:15.405 06:15:59 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:06:15.405 06:15:59 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:06:15.405 06:15:59 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:06:15.663 lcov: LCOV version 1.15 00:06:15.663 06:15:59 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:06:33.749 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:06:33.749 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:06:48.642 06:16:31 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:06:48.642 06:16:31 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:48.642 06:16:31 -- common/autotest_common.sh@10 -- # set +x 00:06:48.642 06:16:31 -- spdk/autotest.sh@78 -- # rm -f 00:06:48.642 06:16:31 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:48.642 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:48.642 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:06:48.642 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:06:48.642 06:16:32 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:06:48.642 06:16:32 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:06:48.642 06:16:32 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:06:48.642 06:16:32 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:06:48.642 
06:16:32 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:48.642 06:16:32 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:06:48.642 06:16:32 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:06:48.642 06:16:32 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:48.642 06:16:32 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:48.642 06:16:32 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:48.642 06:16:32 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:06:48.642 06:16:32 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:06:48.642 06:16:32 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:48.642 06:16:32 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:48.642 06:16:32 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:48.642 06:16:32 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:06:48.642 06:16:32 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:06:48.642 06:16:32 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:06:48.642 06:16:32 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:48.642 06:16:32 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:48.642 06:16:32 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:06:48.642 06:16:32 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:06:48.642 06:16:32 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:06:48.642 06:16:32 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:48.642 06:16:32 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:06:48.642 06:16:32 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:48.642 06:16:32 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:48.642 06:16:32 -- spdk/autotest.sh@100 -- # 
block_in_use /dev/nvme0n1 00:06:48.642 06:16:32 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:06:48.642 06:16:32 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:06:48.642 No valid GPT data, bailing 00:06:48.642 06:16:32 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:48.900 06:16:32 -- scripts/common.sh@394 -- # pt= 00:06:48.900 06:16:32 -- scripts/common.sh@395 -- # return 1 00:06:48.900 06:16:32 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:06:48.900 1+0 records in 00:06:48.900 1+0 records out 00:06:48.900 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00682208 s, 154 MB/s 00:06:48.900 06:16:32 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:48.900 06:16:32 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:48.900 06:16:32 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:06:48.900 06:16:32 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:06:48.901 06:16:32 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:06:48.901 No valid GPT data, bailing 00:06:48.901 06:16:32 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:06:48.901 06:16:32 -- scripts/common.sh@394 -- # pt= 00:06:48.901 06:16:32 -- scripts/common.sh@395 -- # return 1 00:06:48.901 06:16:32 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:06:48.901 1+0 records in 00:06:48.901 1+0 records out 00:06:48.901 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00592884 s, 177 MB/s 00:06:48.901 06:16:32 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:48.901 06:16:32 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:48.901 06:16:32 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:06:48.901 06:16:32 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:06:48.901 06:16:32 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 
00:06:48.901 No valid GPT data, bailing 00:06:48.901 06:16:32 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:06:48.901 06:16:32 -- scripts/common.sh@394 -- # pt= 00:06:48.901 06:16:32 -- scripts/common.sh@395 -- # return 1 00:06:48.901 06:16:32 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:06:48.901 1+0 records in 00:06:48.901 1+0 records out 00:06:48.901 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00529696 s, 198 MB/s 00:06:48.901 06:16:32 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:48.901 06:16:32 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:48.901 06:16:32 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:06:48.901 06:16:32 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:06:48.901 06:16:32 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:06:48.901 No valid GPT data, bailing 00:06:48.901 06:16:33 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:06:49.160 06:16:33 -- scripts/common.sh@394 -- # pt= 00:06:49.160 06:16:33 -- scripts/common.sh@395 -- # return 1 00:06:49.160 06:16:33 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:06:49.160 1+0 records in 00:06:49.160 1+0 records out 00:06:49.160 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0039754 s, 264 MB/s 00:06:49.160 06:16:33 -- spdk/autotest.sh@105 -- # sync 00:06:49.160 06:16:33 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:06:49.160 06:16:33 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:06:49.160 06:16:33 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:06:51.700 06:16:35 -- spdk/autotest.sh@111 -- # uname -s 00:06:51.700 06:16:35 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:06:51.700 06:16:35 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:06:51.700 06:16:35 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 
00:06:52.683 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:52.683 Hugepages 00:06:52.683 node hugesize free / total 00:06:52.683 node0 1048576kB 0 / 0 00:06:52.683 node0 2048kB 0 / 0 00:06:52.683 00:06:52.683 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:52.683 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:52.683 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:06:52.683 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:06:52.683 06:16:36 -- spdk/autotest.sh@117 -- # uname -s 00:06:52.683 06:16:36 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:06:52.683 06:16:36 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:06:52.683 06:16:36 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:53.621 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:53.621 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:53.621 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:53.880 06:16:37 -- common/autotest_common.sh@1517 -- # sleep 1 00:06:54.818 06:16:38 -- common/autotest_common.sh@1518 -- # bdfs=() 00:06:54.818 06:16:38 -- common/autotest_common.sh@1518 -- # local bdfs 00:06:54.818 06:16:38 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:06:54.818 06:16:38 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:06:54.818 06:16:38 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:54.818 06:16:38 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:54.818 06:16:38 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:54.818 06:16:38 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:54.818 06:16:38 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:54.818 06:16:38 -- 
common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:06:54.818 06:16:38 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:54.818 06:16:38 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:55.385 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:55.385 Waiting for block devices as requested 00:06:55.385 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:55.385 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:55.645 06:16:39 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:55.645 06:16:39 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:06:55.645 06:16:39 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:06:55.645 06:16:39 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:06:55.645 06:16:39 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:55.645 06:16:39 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:06:55.645 06:16:39 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:55.645 06:16:39 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:06:55.645 06:16:39 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:06:55.645 06:16:39 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:06:55.645 06:16:39 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:06:55.645 06:16:39 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:55.645 06:16:39 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:55.645 06:16:39 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:06:55.645 06:16:39 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:55.645 06:16:39 -- common/autotest_common.sh@1534 -- 
# [[ 8 -ne 0 ]] 00:06:55.645 06:16:39 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:55.645 06:16:39 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:06:55.645 06:16:39 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:55.645 06:16:39 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:55.645 06:16:39 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:06:55.645 06:16:39 -- common/autotest_common.sh@1543 -- # continue 00:06:55.645 06:16:39 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:55.645 06:16:39 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:06:55.645 06:16:39 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:06:55.645 06:16:39 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:06:55.645 06:16:39 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:55.645 06:16:39 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:06:55.645 06:16:39 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:55.645 06:16:39 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:06:55.645 06:16:39 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:06:55.645 06:16:39 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:06:55.645 06:16:39 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:06:55.645 06:16:39 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:55.645 06:16:39 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:55.645 06:16:39 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:06:55.645 06:16:39 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:55.645 06:16:39 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:55.645 06:16:39 -- common/autotest_common.sh@1540 -- # nvme id-ctrl 
/dev/nvme0 00:06:55.645 06:16:39 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:55.645 06:16:39 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:55.645 06:16:39 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:55.645 06:16:39 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:06:55.645 06:16:39 -- common/autotest_common.sh@1543 -- # continue 00:06:55.645 06:16:39 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:06:55.645 06:16:39 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:55.645 06:16:39 -- common/autotest_common.sh@10 -- # set +x 00:06:55.645 06:16:39 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:06:55.645 06:16:39 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:55.645 06:16:39 -- common/autotest_common.sh@10 -- # set +x 00:06:55.645 06:16:39 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:56.584 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:56.584 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:56.584 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:56.843 06:16:40 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:06:56.844 06:16:40 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:56.844 06:16:40 -- common/autotest_common.sh@10 -- # set +x 00:06:56.844 06:16:40 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:06:56.844 06:16:40 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:06:56.844 06:16:40 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:06:56.844 06:16:40 -- common/autotest_common.sh@1563 -- # bdfs=() 00:06:56.844 06:16:40 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:06:56.844 06:16:40 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:06:56.844 06:16:40 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:06:56.844 06:16:40 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:06:56.844 
06:16:40 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:56.844 06:16:40 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:56.844 06:16:40 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:56.844 06:16:40 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:56.844 06:16:40 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:56.844 06:16:40 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:06:56.844 06:16:40 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:56.844 06:16:40 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:06:56.844 06:16:40 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:06:56.844 06:16:40 -- common/autotest_common.sh@1566 -- # device=0x0010 00:06:56.844 06:16:40 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:56.844 06:16:40 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:06:56.844 06:16:40 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:06:56.844 06:16:40 -- common/autotest_common.sh@1566 -- # device=0x0010 00:06:56.844 06:16:40 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:56.844 06:16:40 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:06:56.844 06:16:40 -- common/autotest_common.sh@1572 -- # return 0 00:06:56.844 06:16:40 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:06:56.844 06:16:40 -- common/autotest_common.sh@1580 -- # return 0 00:06:56.844 06:16:40 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:06:56.844 06:16:40 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:06:56.844 06:16:40 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:56.844 06:16:40 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:56.844 06:16:40 -- spdk/autotest.sh@149 -- # timing_enter lib 00:06:56.844 06:16:40 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:06:56.844 06:16:40 -- common/autotest_common.sh@10 -- # set +x 00:06:56.844 06:16:40 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:06:56.844 06:16:40 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:56.844 06:16:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:56.844 06:16:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:56.844 06:16:40 -- common/autotest_common.sh@10 -- # set +x 00:06:56.844 ************************************ 00:06:56.844 START TEST env 00:06:56.844 ************************************ 00:06:56.844 06:16:40 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:57.106 * Looking for test storage... 00:06:57.106 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:06:57.106 06:16:41 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:57.106 06:16:41 env -- common/autotest_common.sh@1693 -- # lcov --version 00:06:57.106 06:16:41 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:57.106 06:16:41 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:57.106 06:16:41 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:57.106 06:16:41 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:57.106 06:16:41 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:57.106 06:16:41 env -- scripts/common.sh@336 -- # IFS=.-: 00:06:57.106 06:16:41 env -- scripts/common.sh@336 -- # read -ra ver1 00:06:57.106 06:16:41 env -- scripts/common.sh@337 -- # IFS=.-: 00:06:57.106 06:16:41 env -- scripts/common.sh@337 -- # read -ra ver2 00:06:57.106 06:16:41 env -- scripts/common.sh@338 -- # local 'op=<' 00:06:57.106 06:16:41 env -- scripts/common.sh@340 -- # ver1_l=2 00:06:57.106 06:16:41 env -- scripts/common.sh@341 -- # ver2_l=1 00:06:57.106 06:16:41 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:57.106 06:16:41 env -- 
scripts/common.sh@344 -- # case "$op" in 00:06:57.106 06:16:41 env -- scripts/common.sh@345 -- # : 1 00:06:57.106 06:16:41 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:57.106 06:16:41 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:57.106 06:16:41 env -- scripts/common.sh@365 -- # decimal 1 00:06:57.106 06:16:41 env -- scripts/common.sh@353 -- # local d=1 00:06:57.106 06:16:41 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:57.106 06:16:41 env -- scripts/common.sh@355 -- # echo 1 00:06:57.106 06:16:41 env -- scripts/common.sh@365 -- # ver1[v]=1 00:06:57.106 06:16:41 env -- scripts/common.sh@366 -- # decimal 2 00:06:57.106 06:16:41 env -- scripts/common.sh@353 -- # local d=2 00:06:57.106 06:16:41 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:57.106 06:16:41 env -- scripts/common.sh@355 -- # echo 2 00:06:57.106 06:16:41 env -- scripts/common.sh@366 -- # ver2[v]=2 00:06:57.106 06:16:41 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:57.106 06:16:41 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:57.106 06:16:41 env -- scripts/common.sh@368 -- # return 0 00:06:57.106 06:16:41 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:57.106 06:16:41 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:57.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.106 --rc genhtml_branch_coverage=1 00:06:57.106 --rc genhtml_function_coverage=1 00:06:57.106 --rc genhtml_legend=1 00:06:57.106 --rc geninfo_all_blocks=1 00:06:57.106 --rc geninfo_unexecuted_blocks=1 00:06:57.106 00:06:57.106 ' 00:06:57.106 06:16:41 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:57.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.106 --rc genhtml_branch_coverage=1 00:06:57.106 --rc genhtml_function_coverage=1 00:06:57.106 --rc genhtml_legend=1 00:06:57.106 --rc 
geninfo_all_blocks=1 00:06:57.106 --rc geninfo_unexecuted_blocks=1 00:06:57.106 00:06:57.106 ' 00:06:57.106 06:16:41 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:57.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.106 --rc genhtml_branch_coverage=1 00:06:57.106 --rc genhtml_function_coverage=1 00:06:57.106 --rc genhtml_legend=1 00:06:57.106 --rc geninfo_all_blocks=1 00:06:57.106 --rc geninfo_unexecuted_blocks=1 00:06:57.106 00:06:57.106 ' 00:06:57.106 06:16:41 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:57.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.106 --rc genhtml_branch_coverage=1 00:06:57.106 --rc genhtml_function_coverage=1 00:06:57.106 --rc genhtml_legend=1 00:06:57.106 --rc geninfo_all_blocks=1 00:06:57.106 --rc geninfo_unexecuted_blocks=1 00:06:57.106 00:06:57.106 ' 00:06:57.106 06:16:41 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:57.106 06:16:41 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:57.106 06:16:41 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.106 06:16:41 env -- common/autotest_common.sh@10 -- # set +x 00:06:57.106 ************************************ 00:06:57.106 START TEST env_memory 00:06:57.106 ************************************ 00:06:57.106 06:16:41 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:57.106 00:06:57.106 00:06:57.106 CUnit - A unit testing framework for C - Version 2.1-3 00:06:57.106 http://cunit.sourceforge.net/ 00:06:57.106 00:06:57.106 00:06:57.106 Suite: memory 00:06:57.106 Test: alloc and free memory map ...[2024-11-26 06:16:41.231736] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:57.368 passed 00:06:57.368 Test: mem map translation ...[2024-11-26 06:16:41.277659] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:57.368 [2024-11-26 06:16:41.277749] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:57.368 [2024-11-26 06:16:41.277839] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:57.368 [2024-11-26 06:16:41.277899] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:57.368 passed 00:06:57.368 Test: mem map registration ...[2024-11-26 06:16:41.346636] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:06:57.368 [2024-11-26 06:16:41.346728] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:06:57.368 passed 00:06:57.369 Test: mem map adjacent registrations ...passed 00:06:57.369 00:06:57.369 Run Summary: Type Total Ran Passed Failed Inactive 00:06:57.369 suites 1 1 n/a 0 0 00:06:57.369 tests 4 4 4 0 0 00:06:57.369 asserts 152 152 152 0 n/a 00:06:57.369 00:06:57.369 Elapsed time = 0.246 seconds 00:06:57.369 00:06:57.369 real 0m0.285s 00:06:57.369 user 0m0.253s 00:06:57.369 sys 0m0.025s 00:06:57.369 06:16:41 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:57.369 06:16:41 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:57.369 ************************************ 00:06:57.369 END TEST env_memory 00:06:57.369 ************************************ 00:06:57.628 06:16:41 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:57.628 
06:16:41 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:57.628 06:16:41 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.628 06:16:41 env -- common/autotest_common.sh@10 -- # set +x 00:06:57.628 ************************************ 00:06:57.628 START TEST env_vtophys 00:06:57.628 ************************************ 00:06:57.628 06:16:41 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:57.628 EAL: lib.eal log level changed from notice to debug 00:06:57.628 EAL: Detected lcore 0 as core 0 on socket 0 00:06:57.628 EAL: Detected lcore 1 as core 0 on socket 0 00:06:57.628 EAL: Detected lcore 2 as core 0 on socket 0 00:06:57.628 EAL: Detected lcore 3 as core 0 on socket 0 00:06:57.628 EAL: Detected lcore 4 as core 0 on socket 0 00:06:57.628 EAL: Detected lcore 5 as core 0 on socket 0 00:06:57.628 EAL: Detected lcore 6 as core 0 on socket 0 00:06:57.628 EAL: Detected lcore 7 as core 0 on socket 0 00:06:57.628 EAL: Detected lcore 8 as core 0 on socket 0 00:06:57.628 EAL: Detected lcore 9 as core 0 on socket 0 00:06:57.628 EAL: Maximum logical cores by configuration: 128 00:06:57.628 EAL: Detected CPU lcores: 10 00:06:57.628 EAL: Detected NUMA nodes: 1 00:06:57.628 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:06:57.628 EAL: Detected shared linkage of DPDK 00:06:57.628 EAL: No shared files mode enabled, IPC will be disabled 00:06:57.628 EAL: Selected IOVA mode 'PA' 00:06:57.628 EAL: Probing VFIO support... 00:06:57.628 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:57.628 EAL: VFIO modules not loaded, skipping VFIO support... 00:06:57.628 EAL: Ask a virtual area of 0x2e000 bytes 00:06:57.628 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:57.628 EAL: Setting up physically contiguous memory... 
00:06:57.628 EAL: Setting maximum number of open files to 524288 00:06:57.628 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:57.628 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:57.628 EAL: Ask a virtual area of 0x61000 bytes 00:06:57.628 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:57.628 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:57.628 EAL: Ask a virtual area of 0x400000000 bytes 00:06:57.628 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:57.628 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:57.628 EAL: Ask a virtual area of 0x61000 bytes 00:06:57.628 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:57.628 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:57.628 EAL: Ask a virtual area of 0x400000000 bytes 00:06:57.628 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:57.628 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:57.628 EAL: Ask a virtual area of 0x61000 bytes 00:06:57.628 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:57.628 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:57.628 EAL: Ask a virtual area of 0x400000000 bytes 00:06:57.628 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:57.628 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:57.628 EAL: Ask a virtual area of 0x61000 bytes 00:06:57.628 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:57.628 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:57.628 EAL: Ask a virtual area of 0x400000000 bytes 00:06:57.628 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:57.628 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:57.628 EAL: Hugepages will be freed exactly as allocated. 
00:06:57.628 EAL: No shared files mode enabled, IPC is disabled 00:06:57.628 EAL: No shared files mode enabled, IPC is disabled 00:06:57.628 EAL: TSC frequency is ~2290000 KHz 00:06:57.628 EAL: Main lcore 0 is ready (tid=7f99a92d3a40;cpuset=[0]) 00:06:57.628 EAL: Trying to obtain current memory policy. 00:06:57.628 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:57.628 EAL: Restoring previous memory policy: 0 00:06:57.628 EAL: request: mp_malloc_sync 00:06:57.628 EAL: No shared files mode enabled, IPC is disabled 00:06:57.628 EAL: Heap on socket 0 was expanded by 2MB 00:06:57.628 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:57.628 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:57.628 EAL: Mem event callback 'spdk:(nil)' registered 00:06:57.628 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:06:57.888 00:06:57.888 00:06:57.888 CUnit - A unit testing framework for C - Version 2.1-3 00:06:57.888 http://cunit.sourceforge.net/ 00:06:57.888 00:06:57.888 00:06:57.888 Suite: components_suite 00:06:58.148 Test: vtophys_malloc_test ...passed 00:06:58.148 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:58.148 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:58.148 EAL: Restoring previous memory policy: 4 00:06:58.148 EAL: Calling mem event callback 'spdk:(nil)' 00:06:58.148 EAL: request: mp_malloc_sync 00:06:58.148 EAL: No shared files mode enabled, IPC is disabled 00:06:58.148 EAL: Heap on socket 0 was expanded by 4MB 00:06:58.148 EAL: Calling mem event callback 'spdk:(nil)' 00:06:58.148 EAL: request: mp_malloc_sync 00:06:58.148 EAL: No shared files mode enabled, IPC is disabled 00:06:58.148 EAL: Heap on socket 0 was shrunk by 4MB 00:06:58.148 EAL: Trying to obtain current memory policy. 
00:06:58.148 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:58.149 EAL: Restoring previous memory policy: 4 00:06:58.149 EAL: Calling mem event callback 'spdk:(nil)' 00:06:58.149 EAL: request: mp_malloc_sync 00:06:58.149 EAL: No shared files mode enabled, IPC is disabled 00:06:58.149 EAL: Heap on socket 0 was expanded by 6MB 00:06:58.149 EAL: Calling mem event callback 'spdk:(nil)' 00:06:58.149 EAL: request: mp_malloc_sync 00:06:58.149 EAL: No shared files mode enabled, IPC is disabled 00:06:58.149 EAL: Heap on socket 0 was shrunk by 6MB 00:06:58.149 EAL: Trying to obtain current memory policy. 00:06:58.149 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:58.149 EAL: Restoring previous memory policy: 4 00:06:58.149 EAL: Calling mem event callback 'spdk:(nil)' 00:06:58.149 EAL: request: mp_malloc_sync 00:06:58.149 EAL: No shared files mode enabled, IPC is disabled 00:06:58.149 EAL: Heap on socket 0 was expanded by 10MB 00:06:58.149 EAL: Calling mem event callback 'spdk:(nil)' 00:06:58.149 EAL: request: mp_malloc_sync 00:06:58.149 EAL: No shared files mode enabled, IPC is disabled 00:06:58.149 EAL: Heap on socket 0 was shrunk by 10MB 00:06:58.149 EAL: Trying to obtain current memory policy. 00:06:58.149 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:58.149 EAL: Restoring previous memory policy: 4 00:06:58.149 EAL: Calling mem event callback 'spdk:(nil)' 00:06:58.149 EAL: request: mp_malloc_sync 00:06:58.149 EAL: No shared files mode enabled, IPC is disabled 00:06:58.149 EAL: Heap on socket 0 was expanded by 18MB 00:06:58.149 EAL: Calling mem event callback 'spdk:(nil)' 00:06:58.149 EAL: request: mp_malloc_sync 00:06:58.149 EAL: No shared files mode enabled, IPC is disabled 00:06:58.149 EAL: Heap on socket 0 was shrunk by 18MB 00:06:58.149 EAL: Trying to obtain current memory policy. 
00:06:58.149 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:58.149 EAL: Restoring previous memory policy: 4 00:06:58.149 EAL: Calling mem event callback 'spdk:(nil)' 00:06:58.149 EAL: request: mp_malloc_sync 00:06:58.149 EAL: No shared files mode enabled, IPC is disabled 00:06:58.149 EAL: Heap on socket 0 was expanded by 34MB 00:06:58.408 EAL: Calling mem event callback 'spdk:(nil)' 00:06:58.408 EAL: request: mp_malloc_sync 00:06:58.408 EAL: No shared files mode enabled, IPC is disabled 00:06:58.408 EAL: Heap on socket 0 was shrunk by 34MB 00:06:58.408 EAL: Trying to obtain current memory policy. 00:06:58.408 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:58.408 EAL: Restoring previous memory policy: 4 00:06:58.408 EAL: Calling mem event callback 'spdk:(nil)' 00:06:58.408 EAL: request: mp_malloc_sync 00:06:58.408 EAL: No shared files mode enabled, IPC is disabled 00:06:58.408 EAL: Heap on socket 0 was expanded by 66MB 00:06:58.408 EAL: Calling mem event callback 'spdk:(nil)' 00:06:58.408 EAL: request: mp_malloc_sync 00:06:58.408 EAL: No shared files mode enabled, IPC is disabled 00:06:58.408 EAL: Heap on socket 0 was shrunk by 66MB 00:06:58.668 EAL: Trying to obtain current memory policy. 00:06:58.668 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:58.668 EAL: Restoring previous memory policy: 4 00:06:58.668 EAL: Calling mem event callback 'spdk:(nil)' 00:06:58.668 EAL: request: mp_malloc_sync 00:06:58.668 EAL: No shared files mode enabled, IPC is disabled 00:06:58.668 EAL: Heap on socket 0 was expanded by 130MB 00:06:58.927 EAL: Calling mem event callback 'spdk:(nil)' 00:06:58.927 EAL: request: mp_malloc_sync 00:06:58.927 EAL: No shared files mode enabled, IPC is disabled 00:06:58.927 EAL: Heap on socket 0 was shrunk by 130MB 00:06:59.185 EAL: Trying to obtain current memory policy. 
00:06:59.185 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:59.185 EAL: Restoring previous memory policy: 4 00:06:59.185 EAL: Calling mem event callback 'spdk:(nil)' 00:06:59.185 EAL: request: mp_malloc_sync 00:06:59.185 EAL: No shared files mode enabled, IPC is disabled 00:06:59.185 EAL: Heap on socket 0 was expanded by 258MB 00:06:59.753 EAL: Calling mem event callback 'spdk:(nil)' 00:06:59.753 EAL: request: mp_malloc_sync 00:06:59.753 EAL: No shared files mode enabled, IPC is disabled 00:06:59.753 EAL: Heap on socket 0 was shrunk by 258MB 00:07:00.012 EAL: Trying to obtain current memory policy. 00:07:00.012 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:00.271 EAL: Restoring previous memory policy: 4 00:07:00.271 EAL: Calling mem event callback 'spdk:(nil)' 00:07:00.271 EAL: request: mp_malloc_sync 00:07:00.271 EAL: No shared files mode enabled, IPC is disabled 00:07:00.271 EAL: Heap on socket 0 was expanded by 514MB 00:07:01.209 EAL: Calling mem event callback 'spdk:(nil)' 00:07:01.209 EAL: request: mp_malloc_sync 00:07:01.209 EAL: No shared files mode enabled, IPC is disabled 00:07:01.209 EAL: Heap on socket 0 was shrunk by 514MB 00:07:02.147 EAL: Trying to obtain current memory policy. 
00:07:02.147 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:02.405 EAL: Restoring previous memory policy: 4 00:07:02.405 EAL: Calling mem event callback 'spdk:(nil)' 00:07:02.405 EAL: request: mp_malloc_sync 00:07:02.405 EAL: No shared files mode enabled, IPC is disabled 00:07:02.405 EAL: Heap on socket 0 was expanded by 1026MB 00:07:04.314 EAL: Calling mem event callback 'spdk:(nil)' 00:07:04.314 EAL: request: mp_malloc_sync 00:07:04.314 EAL: No shared files mode enabled, IPC is disabled 00:07:04.314 EAL: Heap on socket 0 was shrunk by 1026MB 00:07:06.216 passed 00:07:06.216 00:07:06.216 Run Summary: Type Total Ran Passed Failed Inactive 00:07:06.216 suites 1 1 n/a 0 0 00:07:06.216 tests 2 2 2 0 0 00:07:06.216 asserts 5775 5775 5775 0 n/a 00:07:06.216 00:07:06.216 Elapsed time = 8.385 seconds 00:07:06.216 EAL: Calling mem event callback 'spdk:(nil)' 00:07:06.216 EAL: request: mp_malloc_sync 00:07:06.216 EAL: No shared files mode enabled, IPC is disabled 00:07:06.216 EAL: Heap on socket 0 was shrunk by 2MB 00:07:06.216 EAL: No shared files mode enabled, IPC is disabled 00:07:06.216 EAL: No shared files mode enabled, IPC is disabled 00:07:06.216 EAL: No shared files mode enabled, IPC is disabled 00:07:06.216 00:07:06.216 real 0m8.735s 00:07:06.216 user 0m7.751s 00:07:06.216 sys 0m0.816s 00:07:06.216 ************************************ 00:07:06.216 END TEST env_vtophys 00:07:06.216 ************************************ 00:07:06.216 06:16:50 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.216 06:16:50 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:07:06.216 06:16:50 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:07:06.216 06:16:50 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:06.216 06:16:50 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.216 06:16:50 env -- common/autotest_common.sh@10 -- # set +x 00:07:06.216 
************************************ 00:07:06.216 START TEST env_pci 00:07:06.216 ************************************ 00:07:06.216 06:16:50 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:07:06.474 00:07:06.474 00:07:06.474 CUnit - A unit testing framework for C - Version 2.1-3 00:07:06.474 http://cunit.sourceforge.net/ 00:07:06.474 00:07:06.474 00:07:06.474 Suite: pci 00:07:06.474 Test: pci_hook ...[2024-11-26 06:16:50.356266] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57025 has claimed it 00:07:06.474 passed 00:07:06.474 00:07:06.474 EAL: Cannot find device (10000:00:01.0) 00:07:06.474 EAL: Failed to attach device on primary process 00:07:06.474 Run Summary: Type Total Ran Passed Failed Inactive 00:07:06.474 suites 1 1 n/a 0 0 00:07:06.474 tests 1 1 1 0 0 00:07:06.474 asserts 25 25 25 0 n/a 00:07:06.474 00:07:06.474 Elapsed time = 0.005 seconds 00:07:06.474 00:07:06.474 real 0m0.086s 00:07:06.474 user 0m0.033s 00:07:06.474 sys 0m0.052s 00:07:06.474 06:16:50 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.474 06:16:50 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:07:06.474 ************************************ 00:07:06.474 END TEST env_pci 00:07:06.474 ************************************ 00:07:06.474 06:16:50 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:07:06.474 06:16:50 env -- env/env.sh@15 -- # uname 00:07:06.474 06:16:50 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:07:06.474 06:16:50 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:07:06.474 06:16:50 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:06.474 06:16:50 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:06.474 06:16:50 env 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.474 06:16:50 env -- common/autotest_common.sh@10 -- # set +x 00:07:06.474 ************************************ 00:07:06.474 START TEST env_dpdk_post_init 00:07:06.474 ************************************ 00:07:06.474 06:16:50 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:06.474 EAL: Detected CPU lcores: 10 00:07:06.474 EAL: Detected NUMA nodes: 1 00:07:06.474 EAL: Detected shared linkage of DPDK 00:07:06.474 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:06.474 EAL: Selected IOVA mode 'PA' 00:07:06.732 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:06.732 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:07:06.732 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:07:06.732 Starting DPDK initialization... 00:07:06.732 Starting SPDK post initialization... 00:07:06.732 SPDK NVMe probe 00:07:06.732 Attaching to 0000:00:10.0 00:07:06.732 Attaching to 0000:00:11.0 00:07:06.732 Attached to 0000:00:10.0 00:07:06.732 Attached to 0000:00:11.0 00:07:06.732 Cleaning up... 
00:07:06.732 00:07:06.732 real 0m0.293s 00:07:06.732 user 0m0.100s 00:07:06.732 sys 0m0.093s 00:07:06.732 06:16:50 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.732 ************************************ 00:07:06.732 END TEST env_dpdk_post_init 00:07:06.732 ************************************ 00:07:06.732 06:16:50 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:07:06.733 06:16:50 env -- env/env.sh@26 -- # uname 00:07:06.733 06:16:50 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:07:06.733 06:16:50 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:07:06.733 06:16:50 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:06.733 06:16:50 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.733 06:16:50 env -- common/autotest_common.sh@10 -- # set +x 00:07:06.733 ************************************ 00:07:06.733 START TEST env_mem_callbacks 00:07:06.733 ************************************ 00:07:06.733 06:16:50 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:07:06.992 EAL: Detected CPU lcores: 10 00:07:06.992 EAL: Detected NUMA nodes: 1 00:07:06.992 EAL: Detected shared linkage of DPDK 00:07:06.992 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:06.992 EAL: Selected IOVA mode 'PA' 00:07:06.992 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:06.992 00:07:06.992 00:07:06.992 CUnit - A unit testing framework for C - Version 2.1-3 00:07:06.992 http://cunit.sourceforge.net/ 00:07:06.992 00:07:06.992 00:07:06.992 Suite: memory 00:07:06.992 Test: test ... 
00:07:06.992 register 0x200000200000 2097152 00:07:06.992 malloc 3145728 00:07:06.993 register 0x200000400000 4194304 00:07:06.993 buf 0x2000004fffc0 len 3145728 PASSED 00:07:06.993 malloc 64 00:07:06.993 buf 0x2000004ffec0 len 64 PASSED 00:07:06.993 malloc 4194304 00:07:06.993 register 0x200000800000 6291456 00:07:06.993 buf 0x2000009fffc0 len 4194304 PASSED 00:07:06.993 free 0x2000004fffc0 3145728 00:07:06.993 free 0x2000004ffec0 64 00:07:06.993 unregister 0x200000400000 4194304 PASSED 00:07:06.993 free 0x2000009fffc0 4194304 00:07:06.993 unregister 0x200000800000 6291456 PASSED 00:07:06.993 malloc 8388608 00:07:06.993 register 0x200000400000 10485760 00:07:06.993 buf 0x2000005fffc0 len 8388608 PASSED 00:07:06.993 free 0x2000005fffc0 8388608 00:07:06.993 unregister 0x200000400000 10485760 PASSED 00:07:06.993 passed 00:07:06.993 00:07:06.993 Run Summary: Type Total Ran Passed Failed Inactive 00:07:06.993 suites 1 1 n/a 0 0 00:07:06.993 tests 1 1 1 0 0 00:07:06.993 asserts 15 15 15 0 n/a 00:07:06.993 00:07:06.993 Elapsed time = 0.081 seconds 00:07:07.253 00:07:07.253 real 0m0.284s 00:07:07.253 user 0m0.109s 00:07:07.253 sys 0m0.071s 00:07:07.253 06:16:51 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:07.253 06:16:51 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:07:07.253 ************************************ 00:07:07.253 END TEST env_mem_callbacks 00:07:07.253 ************************************ 00:07:07.253 ************************************ 00:07:07.253 END TEST env 00:07:07.253 ************************************ 00:07:07.253 00:07:07.253 real 0m10.272s 00:07:07.253 user 0m8.487s 00:07:07.253 sys 0m1.412s 00:07:07.253 06:16:51 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:07.253 06:16:51 env -- common/autotest_common.sh@10 -- # set +x 00:07:07.253 06:16:51 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:07:07.253 06:16:51 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:07.253 06:16:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.253 06:16:51 -- common/autotest_common.sh@10 -- # set +x 00:07:07.253 ************************************ 00:07:07.253 START TEST rpc 00:07:07.253 ************************************ 00:07:07.253 06:16:51 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:07:07.253 * Looking for test storage... 00:07:07.513 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:07:07.513 06:16:51 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:07.513 06:16:51 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:07:07.513 06:16:51 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:07.513 06:16:51 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:07.513 06:16:51 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:07.513 06:16:51 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:07.513 06:16:51 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:07.513 06:16:51 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:07.513 06:16:51 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:07.513 06:16:51 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:07.513 06:16:51 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:07.513 06:16:51 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:07.513 06:16:51 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:07.513 06:16:51 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:07.513 06:16:51 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:07.513 06:16:51 rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:07.513 06:16:51 rpc -- scripts/common.sh@345 -- # : 1 00:07:07.513 06:16:51 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:07.513 06:16:51 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:07.513 06:16:51 rpc -- scripts/common.sh@365 -- # decimal 1 00:07:07.513 06:16:51 rpc -- scripts/common.sh@353 -- # local d=1 00:07:07.513 06:16:51 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:07.513 06:16:51 rpc -- scripts/common.sh@355 -- # echo 1 00:07:07.513 06:16:51 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:07.513 06:16:51 rpc -- scripts/common.sh@366 -- # decimal 2 00:07:07.513 06:16:51 rpc -- scripts/common.sh@353 -- # local d=2 00:07:07.513 06:16:51 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:07.513 06:16:51 rpc -- scripts/common.sh@355 -- # echo 2 00:07:07.513 06:16:51 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:07.513 06:16:51 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:07.513 06:16:51 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:07.513 06:16:51 rpc -- scripts/common.sh@368 -- # return 0 00:07:07.513 06:16:51 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:07.513 06:16:51 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:07.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.513 --rc genhtml_branch_coverage=1 00:07:07.513 --rc genhtml_function_coverage=1 00:07:07.513 --rc genhtml_legend=1 00:07:07.513 --rc geninfo_all_blocks=1 00:07:07.513 --rc geninfo_unexecuted_blocks=1 00:07:07.513 00:07:07.513 ' 00:07:07.513 06:16:51 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:07.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.513 --rc genhtml_branch_coverage=1 00:07:07.513 --rc genhtml_function_coverage=1 00:07:07.513 --rc genhtml_legend=1 00:07:07.513 --rc geninfo_all_blocks=1 00:07:07.513 --rc geninfo_unexecuted_blocks=1 00:07:07.513 00:07:07.513 ' 00:07:07.513 06:16:51 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:07.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:07.513 --rc genhtml_branch_coverage=1 00:07:07.513 --rc genhtml_function_coverage=1 00:07:07.513 --rc genhtml_legend=1 00:07:07.513 --rc geninfo_all_blocks=1 00:07:07.513 --rc geninfo_unexecuted_blocks=1 00:07:07.513 00:07:07.513 ' 00:07:07.513 06:16:51 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:07.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.513 --rc genhtml_branch_coverage=1 00:07:07.513 --rc genhtml_function_coverage=1 00:07:07.513 --rc genhtml_legend=1 00:07:07.513 --rc geninfo_all_blocks=1 00:07:07.513 --rc geninfo_unexecuted_blocks=1 00:07:07.513 00:07:07.513 ' 00:07:07.513 06:16:51 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:07:07.513 06:16:51 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57152 00:07:07.513 06:16:51 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:07.513 06:16:51 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57152 00:07:07.513 06:16:51 rpc -- common/autotest_common.sh@835 -- # '[' -z 57152 ']' 00:07:07.513 06:16:51 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.513 06:16:51 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:07.514 06:16:51 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.514 06:16:51 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:07.514 06:16:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.514 [2024-11-26 06:16:51.622907] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:07:07.514 [2024-11-26 06:16:51.623222] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57152 ] 00:07:07.774 [2024-11-26 06:16:51.809660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.035 [2024-11-26 06:16:51.932008] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:07:08.035 [2024-11-26 06:16:51.932192] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57152' to capture a snapshot of events at runtime. 00:07:08.035 [2024-11-26 06:16:51.932257] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:08.035 [2024-11-26 06:16:51.932392] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:08.035 [2024-11-26 06:16:51.932467] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57152 for offline analysis/debug. 
00:07:08.035 [2024-11-26 06:16:51.933810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.072 06:16:52 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:09.072 06:16:52 rpc -- common/autotest_common.sh@868 -- # return 0 00:07:09.072 06:16:52 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:07:09.072 06:16:52 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:07:09.072 06:16:52 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:07:09.072 06:16:52 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:07:09.072 06:16:52 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:09.072 06:16:52 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:09.072 06:16:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.072 ************************************ 00:07:09.072 START TEST rpc_integrity 00:07:09.072 ************************************ 00:07:09.072 06:16:52 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:07:09.072 06:16:52 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:09.072 06:16:52 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.072 06:16:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:09.072 06:16:52 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.072 06:16:52 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:09.072 06:16:52 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:09.072 06:16:52 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:09.072 06:16:52 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:09.072 06:16:52 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.072 06:16:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:09.073 06:16:52 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.073 06:16:52 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:07:09.073 06:16:52 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:09.073 06:16:52 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.073 06:16:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:09.073 06:16:52 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.073 06:16:52 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:09.073 { 00:07:09.073 "name": "Malloc0", 00:07:09.073 "aliases": [ 00:07:09.073 "6b1eaba7-7e73-4cb7-8216-af68b204d919" 00:07:09.073 ], 00:07:09.073 "product_name": "Malloc disk", 00:07:09.073 "block_size": 512, 00:07:09.073 "num_blocks": 16384, 00:07:09.073 "uuid": "6b1eaba7-7e73-4cb7-8216-af68b204d919", 00:07:09.073 "assigned_rate_limits": { 00:07:09.073 "rw_ios_per_sec": 0, 00:07:09.073 "rw_mbytes_per_sec": 0, 00:07:09.073 "r_mbytes_per_sec": 0, 00:07:09.073 "w_mbytes_per_sec": 0 00:07:09.073 }, 00:07:09.073 "claimed": false, 00:07:09.073 "zoned": false, 00:07:09.073 "supported_io_types": { 00:07:09.073 "read": true, 00:07:09.073 "write": true, 00:07:09.073 "unmap": true, 00:07:09.073 "flush": true, 00:07:09.073 "reset": true, 00:07:09.073 "nvme_admin": false, 00:07:09.073 "nvme_io": false, 00:07:09.073 "nvme_io_md": false, 00:07:09.073 "write_zeroes": true, 00:07:09.073 "zcopy": true, 00:07:09.073 "get_zone_info": false, 00:07:09.073 "zone_management": false, 00:07:09.073 "zone_append": false, 00:07:09.073 "compare": false, 00:07:09.073 "compare_and_write": false, 00:07:09.073 "abort": true, 00:07:09.073 "seek_hole": false, 
00:07:09.073 "seek_data": false, 00:07:09.073 "copy": true, 00:07:09.073 "nvme_iov_md": false 00:07:09.073 }, 00:07:09.073 "memory_domains": [ 00:07:09.073 { 00:07:09.073 "dma_device_id": "system", 00:07:09.073 "dma_device_type": 1 00:07:09.073 }, 00:07:09.073 { 00:07:09.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:09.073 "dma_device_type": 2 00:07:09.073 } 00:07:09.073 ], 00:07:09.073 "driver_specific": {} 00:07:09.073 } 00:07:09.073 ]' 00:07:09.073 06:16:52 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:09.073 06:16:53 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:09.073 06:16:53 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:07:09.073 06:16:53 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.073 06:16:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:09.073 [2024-11-26 06:16:53.044134] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:07:09.073 [2024-11-26 06:16:53.044219] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:09.073 [2024-11-26 06:16:53.044265] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:07:09.073 [2024-11-26 06:16:53.044289] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:09.073 [2024-11-26 06:16:53.047155] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:09.073 [2024-11-26 06:16:53.047205] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:09.073 Passthru0 00:07:09.073 06:16:53 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.073 06:16:53 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:09.073 06:16:53 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.073 06:16:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:07:09.073 06:16:53 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.073 06:16:53 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:09.073 { 00:07:09.073 "name": "Malloc0", 00:07:09.073 "aliases": [ 00:07:09.073 "6b1eaba7-7e73-4cb7-8216-af68b204d919" 00:07:09.073 ], 00:07:09.073 "product_name": "Malloc disk", 00:07:09.073 "block_size": 512, 00:07:09.073 "num_blocks": 16384, 00:07:09.073 "uuid": "6b1eaba7-7e73-4cb7-8216-af68b204d919", 00:07:09.073 "assigned_rate_limits": { 00:07:09.073 "rw_ios_per_sec": 0, 00:07:09.073 "rw_mbytes_per_sec": 0, 00:07:09.073 "r_mbytes_per_sec": 0, 00:07:09.073 "w_mbytes_per_sec": 0 00:07:09.073 }, 00:07:09.073 "claimed": true, 00:07:09.073 "claim_type": "exclusive_write", 00:07:09.073 "zoned": false, 00:07:09.073 "supported_io_types": { 00:07:09.073 "read": true, 00:07:09.073 "write": true, 00:07:09.073 "unmap": true, 00:07:09.073 "flush": true, 00:07:09.073 "reset": true, 00:07:09.073 "nvme_admin": false, 00:07:09.073 "nvme_io": false, 00:07:09.073 "nvme_io_md": false, 00:07:09.073 "write_zeroes": true, 00:07:09.073 "zcopy": true, 00:07:09.073 "get_zone_info": false, 00:07:09.073 "zone_management": false, 00:07:09.073 "zone_append": false, 00:07:09.073 "compare": false, 00:07:09.073 "compare_and_write": false, 00:07:09.073 "abort": true, 00:07:09.073 "seek_hole": false, 00:07:09.073 "seek_data": false, 00:07:09.073 "copy": true, 00:07:09.073 "nvme_iov_md": false 00:07:09.073 }, 00:07:09.073 "memory_domains": [ 00:07:09.073 { 00:07:09.073 "dma_device_id": "system", 00:07:09.073 "dma_device_type": 1 00:07:09.073 }, 00:07:09.073 { 00:07:09.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:09.073 "dma_device_type": 2 00:07:09.073 } 00:07:09.073 ], 00:07:09.073 "driver_specific": {} 00:07:09.073 }, 00:07:09.073 { 00:07:09.073 "name": "Passthru0", 00:07:09.073 "aliases": [ 00:07:09.073 "8d35ed9f-0467-595e-a638-3c1286ae3fca" 00:07:09.073 ], 00:07:09.073 "product_name": "passthru", 00:07:09.073 
"block_size": 512, 00:07:09.073 "num_blocks": 16384, 00:07:09.073 "uuid": "8d35ed9f-0467-595e-a638-3c1286ae3fca", 00:07:09.073 "assigned_rate_limits": { 00:07:09.073 "rw_ios_per_sec": 0, 00:07:09.073 "rw_mbytes_per_sec": 0, 00:07:09.073 "r_mbytes_per_sec": 0, 00:07:09.073 "w_mbytes_per_sec": 0 00:07:09.073 }, 00:07:09.073 "claimed": false, 00:07:09.073 "zoned": false, 00:07:09.073 "supported_io_types": { 00:07:09.073 "read": true, 00:07:09.073 "write": true, 00:07:09.073 "unmap": true, 00:07:09.073 "flush": true, 00:07:09.073 "reset": true, 00:07:09.073 "nvme_admin": false, 00:07:09.073 "nvme_io": false, 00:07:09.073 "nvme_io_md": false, 00:07:09.073 "write_zeroes": true, 00:07:09.073 "zcopy": true, 00:07:09.073 "get_zone_info": false, 00:07:09.073 "zone_management": false, 00:07:09.073 "zone_append": false, 00:07:09.073 "compare": false, 00:07:09.073 "compare_and_write": false, 00:07:09.073 "abort": true, 00:07:09.073 "seek_hole": false, 00:07:09.073 "seek_data": false, 00:07:09.073 "copy": true, 00:07:09.073 "nvme_iov_md": false 00:07:09.073 }, 00:07:09.073 "memory_domains": [ 00:07:09.073 { 00:07:09.073 "dma_device_id": "system", 00:07:09.073 "dma_device_type": 1 00:07:09.073 }, 00:07:09.073 { 00:07:09.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:09.074 "dma_device_type": 2 00:07:09.074 } 00:07:09.074 ], 00:07:09.074 "driver_specific": { 00:07:09.074 "passthru": { 00:07:09.074 "name": "Passthru0", 00:07:09.074 "base_bdev_name": "Malloc0" 00:07:09.074 } 00:07:09.074 } 00:07:09.074 } 00:07:09.074 ]' 00:07:09.074 06:16:53 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:09.074 06:16:53 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:09.074 06:16:53 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:09.074 06:16:53 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.074 06:16:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:09.074 06:16:53 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.074 06:16:53 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:07:09.074 06:16:53 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.074 06:16:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:09.074 06:16:53 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.074 06:16:53 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:09.074 06:16:53 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.074 06:16:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:09.074 06:16:53 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.074 06:16:53 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:09.074 06:16:53 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:09.334 ************************************ 00:07:09.334 END TEST rpc_integrity 00:07:09.334 ************************************ 00:07:09.334 06:16:53 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:09.334 00:07:09.334 real 0m0.357s 00:07:09.334 user 0m0.193s 00:07:09.334 sys 0m0.047s 00:07:09.334 06:16:53 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:09.334 06:16:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:09.334 06:16:53 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:07:09.334 06:16:53 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:09.334 06:16:53 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:09.334 06:16:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.334 ************************************ 00:07:09.334 START TEST rpc_plugins 00:07:09.334 ************************************ 00:07:09.334 06:16:53 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:07:09.334 06:16:53 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:07:09.334 06:16:53 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.334 06:16:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:09.334 06:16:53 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.334 06:16:53 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:07:09.334 06:16:53 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:07:09.334 06:16:53 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.334 06:16:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:09.334 06:16:53 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.334 06:16:53 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:07:09.334 { 00:07:09.334 "name": "Malloc1", 00:07:09.334 "aliases": [ 00:07:09.334 "825ef7e9-0e17-4216-97fe-e494ebbe5734" 00:07:09.334 ], 00:07:09.334 "product_name": "Malloc disk", 00:07:09.334 "block_size": 4096, 00:07:09.334 "num_blocks": 256, 00:07:09.334 "uuid": "825ef7e9-0e17-4216-97fe-e494ebbe5734", 00:07:09.334 "assigned_rate_limits": { 00:07:09.334 "rw_ios_per_sec": 0, 00:07:09.334 "rw_mbytes_per_sec": 0, 00:07:09.334 "r_mbytes_per_sec": 0, 00:07:09.334 "w_mbytes_per_sec": 0 00:07:09.334 }, 00:07:09.334 "claimed": false, 00:07:09.334 "zoned": false, 00:07:09.334 "supported_io_types": { 00:07:09.334 "read": true, 00:07:09.334 "write": true, 00:07:09.334 "unmap": true, 00:07:09.334 "flush": true, 00:07:09.334 "reset": true, 00:07:09.334 "nvme_admin": false, 00:07:09.334 "nvme_io": false, 00:07:09.334 "nvme_io_md": false, 00:07:09.334 "write_zeroes": true, 00:07:09.334 "zcopy": true, 00:07:09.334 "get_zone_info": false, 00:07:09.335 "zone_management": false, 00:07:09.335 "zone_append": false, 00:07:09.335 "compare": false, 00:07:09.335 "compare_and_write": false, 00:07:09.335 "abort": true, 00:07:09.335 "seek_hole": false, 00:07:09.335 "seek_data": false, 00:07:09.335 "copy": 
true, 00:07:09.335 "nvme_iov_md": false 00:07:09.335 }, 00:07:09.335 "memory_domains": [ 00:07:09.335 { 00:07:09.335 "dma_device_id": "system", 00:07:09.335 "dma_device_type": 1 00:07:09.335 }, 00:07:09.335 { 00:07:09.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:09.335 "dma_device_type": 2 00:07:09.335 } 00:07:09.335 ], 00:07:09.335 "driver_specific": {} 00:07:09.335 } 00:07:09.335 ]' 00:07:09.335 06:16:53 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:07:09.335 06:16:53 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:07:09.335 06:16:53 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:07:09.335 06:16:53 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.335 06:16:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:09.335 06:16:53 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.335 06:16:53 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:07:09.335 06:16:53 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.335 06:16:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:09.335 06:16:53 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.335 06:16:53 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:07:09.335 06:16:53 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:07:09.335 ************************************ 00:07:09.335 END TEST rpc_plugins 00:07:09.335 ************************************ 00:07:09.335 06:16:53 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:07:09.335 00:07:09.335 real 0m0.170s 00:07:09.335 user 0m0.097s 00:07:09.335 sys 0m0.024s 00:07:09.335 06:16:53 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:09.335 06:16:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:09.594 06:16:53 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:07:09.594 06:16:53 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:09.594 06:16:53 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:09.594 06:16:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.594 ************************************ 00:07:09.594 START TEST rpc_trace_cmd_test 00:07:09.594 ************************************ 00:07:09.595 06:16:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:07:09.595 06:16:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:07:09.595 06:16:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:07:09.595 06:16:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.595 06:16:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.595 06:16:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.595 06:16:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:07:09.595 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57152", 00:07:09.595 "tpoint_group_mask": "0x8", 00:07:09.595 "iscsi_conn": { 00:07:09.595 "mask": "0x2", 00:07:09.595 "tpoint_mask": "0x0" 00:07:09.595 }, 00:07:09.595 "scsi": { 00:07:09.595 "mask": "0x4", 00:07:09.595 "tpoint_mask": "0x0" 00:07:09.595 }, 00:07:09.595 "bdev": { 00:07:09.595 "mask": "0x8", 00:07:09.595 "tpoint_mask": "0xffffffffffffffff" 00:07:09.595 }, 00:07:09.595 "nvmf_rdma": { 00:07:09.595 "mask": "0x10", 00:07:09.595 "tpoint_mask": "0x0" 00:07:09.595 }, 00:07:09.595 "nvmf_tcp": { 00:07:09.595 "mask": "0x20", 00:07:09.595 "tpoint_mask": "0x0" 00:07:09.595 }, 00:07:09.595 "ftl": { 00:07:09.595 "mask": "0x40", 00:07:09.595 "tpoint_mask": "0x0" 00:07:09.595 }, 00:07:09.595 "blobfs": { 00:07:09.595 "mask": "0x80", 00:07:09.595 "tpoint_mask": "0x0" 00:07:09.595 }, 00:07:09.595 "dsa": { 00:07:09.595 "mask": "0x200", 00:07:09.595 "tpoint_mask": "0x0" 00:07:09.595 }, 00:07:09.595 "thread": { 00:07:09.595 "mask": "0x400", 00:07:09.595 
"tpoint_mask": "0x0" 00:07:09.595 }, 00:07:09.595 "nvme_pcie": { 00:07:09.595 "mask": "0x800", 00:07:09.595 "tpoint_mask": "0x0" 00:07:09.595 }, 00:07:09.595 "iaa": { 00:07:09.595 "mask": "0x1000", 00:07:09.595 "tpoint_mask": "0x0" 00:07:09.595 }, 00:07:09.595 "nvme_tcp": { 00:07:09.595 "mask": "0x2000", 00:07:09.595 "tpoint_mask": "0x0" 00:07:09.595 }, 00:07:09.595 "bdev_nvme": { 00:07:09.595 "mask": "0x4000", 00:07:09.595 "tpoint_mask": "0x0" 00:07:09.595 }, 00:07:09.595 "sock": { 00:07:09.595 "mask": "0x8000", 00:07:09.595 "tpoint_mask": "0x0" 00:07:09.595 }, 00:07:09.595 "blob": { 00:07:09.595 "mask": "0x10000", 00:07:09.595 "tpoint_mask": "0x0" 00:07:09.595 }, 00:07:09.595 "bdev_raid": { 00:07:09.595 "mask": "0x20000", 00:07:09.595 "tpoint_mask": "0x0" 00:07:09.595 }, 00:07:09.595 "scheduler": { 00:07:09.595 "mask": "0x40000", 00:07:09.595 "tpoint_mask": "0x0" 00:07:09.595 } 00:07:09.595 }' 00:07:09.595 06:16:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:07:09.595 06:16:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:07:09.595 06:16:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:07:09.595 06:16:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:07:09.595 06:16:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:07:09.595 06:16:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:07:09.595 06:16:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:07:09.595 06:16:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:07:09.595 06:16:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:07:09.854 ************************************ 00:07:09.854 END TEST rpc_trace_cmd_test 00:07:09.855 ************************************ 00:07:09.855 06:16:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:07:09.855 00:07:09.855 real 0m0.247s 00:07:09.855 user 
0m0.188s 00:07:09.855 sys 0m0.048s 00:07:09.855 06:16:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:09.855 06:16:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.855 06:16:53 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:07:09.855 06:16:53 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:07:09.855 06:16:53 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:07:09.855 06:16:53 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:09.855 06:16:53 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:09.855 06:16:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.855 ************************************ 00:07:09.855 START TEST rpc_daemon_integrity 00:07:09.855 ************************************ 00:07:09.855 06:16:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:07:09.855 06:16:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:09.855 06:16:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.855 06:16:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:09.855 06:16:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.855 06:16:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:09.855 06:16:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:09.855 06:16:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:09.855 06:16:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:09.855 06:16:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.855 06:16:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:09.855 06:16:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.855 06:16:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # 
malloc=Malloc2 00:07:09.855 06:16:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:09.855 06:16:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.855 06:16:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:09.855 06:16:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.855 06:16:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:09.855 { 00:07:09.855 "name": "Malloc2", 00:07:09.855 "aliases": [ 00:07:09.855 "01619816-d3e1-46b7-8753-3090b0a04ef1" 00:07:09.855 ], 00:07:09.855 "product_name": "Malloc disk", 00:07:09.855 "block_size": 512, 00:07:09.855 "num_blocks": 16384, 00:07:09.855 "uuid": "01619816-d3e1-46b7-8753-3090b0a04ef1", 00:07:09.855 "assigned_rate_limits": { 00:07:09.855 "rw_ios_per_sec": 0, 00:07:09.855 "rw_mbytes_per_sec": 0, 00:07:09.855 "r_mbytes_per_sec": 0, 00:07:09.855 "w_mbytes_per_sec": 0 00:07:09.855 }, 00:07:09.855 "claimed": false, 00:07:09.855 "zoned": false, 00:07:09.855 "supported_io_types": { 00:07:09.855 "read": true, 00:07:09.855 "write": true, 00:07:09.855 "unmap": true, 00:07:09.855 "flush": true, 00:07:09.855 "reset": true, 00:07:09.855 "nvme_admin": false, 00:07:09.855 "nvme_io": false, 00:07:09.855 "nvme_io_md": false, 00:07:09.855 "write_zeroes": true, 00:07:09.855 "zcopy": true, 00:07:09.855 "get_zone_info": false, 00:07:09.855 "zone_management": false, 00:07:09.855 "zone_append": false, 00:07:09.855 "compare": false, 00:07:09.855 "compare_and_write": false, 00:07:09.855 "abort": true, 00:07:09.855 "seek_hole": false, 00:07:09.855 "seek_data": false, 00:07:09.855 "copy": true, 00:07:09.855 "nvme_iov_md": false 00:07:09.855 }, 00:07:09.855 "memory_domains": [ 00:07:09.855 { 00:07:09.855 "dma_device_id": "system", 00:07:09.855 "dma_device_type": 1 00:07:09.855 }, 00:07:09.855 { 00:07:09.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:09.855 "dma_device_type": 2 00:07:09.855 } 
00:07:09.855 ], 00:07:09.855 "driver_specific": {} 00:07:09.855 } 00:07:09.855 ]' 00:07:09.855 06:16:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:09.855 06:16:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:09.855 06:16:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:07:09.855 06:16:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.855 06:16:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:10.115 [2024-11-26 06:16:53.988679] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:07:10.115 [2024-11-26 06:16:53.988761] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:10.115 [2024-11-26 06:16:53.988787] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:10.115 [2024-11-26 06:16:53.988799] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:10.115 [2024-11-26 06:16:53.991235] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:10.115 [2024-11-26 06:16:53.991359] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:10.115 Passthru0 00:07:10.115 06:16:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.115 06:16:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:10.115 06:16:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.115 06:16:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:10.115 06:16:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.115 06:16:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:10.115 { 00:07:10.115 "name": "Malloc2", 00:07:10.115 "aliases": [ 00:07:10.115 "01619816-d3e1-46b7-8753-3090b0a04ef1" 
00:07:10.115 ], 00:07:10.115 "product_name": "Malloc disk", 00:07:10.115 "block_size": 512, 00:07:10.115 "num_blocks": 16384, 00:07:10.115 "uuid": "01619816-d3e1-46b7-8753-3090b0a04ef1", 00:07:10.115 "assigned_rate_limits": { 00:07:10.115 "rw_ios_per_sec": 0, 00:07:10.115 "rw_mbytes_per_sec": 0, 00:07:10.115 "r_mbytes_per_sec": 0, 00:07:10.115 "w_mbytes_per_sec": 0 00:07:10.115 }, 00:07:10.115 "claimed": true, 00:07:10.115 "claim_type": "exclusive_write", 00:07:10.115 "zoned": false, 00:07:10.115 "supported_io_types": { 00:07:10.115 "read": true, 00:07:10.115 "write": true, 00:07:10.115 "unmap": true, 00:07:10.115 "flush": true, 00:07:10.115 "reset": true, 00:07:10.115 "nvme_admin": false, 00:07:10.115 "nvme_io": false, 00:07:10.115 "nvme_io_md": false, 00:07:10.115 "write_zeroes": true, 00:07:10.116 "zcopy": true, 00:07:10.116 "get_zone_info": false, 00:07:10.116 "zone_management": false, 00:07:10.116 "zone_append": false, 00:07:10.116 "compare": false, 00:07:10.116 "compare_and_write": false, 00:07:10.116 "abort": true, 00:07:10.116 "seek_hole": false, 00:07:10.116 "seek_data": false, 00:07:10.116 "copy": true, 00:07:10.116 "nvme_iov_md": false 00:07:10.116 }, 00:07:10.116 "memory_domains": [ 00:07:10.116 { 00:07:10.116 "dma_device_id": "system", 00:07:10.116 "dma_device_type": 1 00:07:10.116 }, 00:07:10.116 { 00:07:10.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:10.116 "dma_device_type": 2 00:07:10.116 } 00:07:10.116 ], 00:07:10.116 "driver_specific": {} 00:07:10.116 }, 00:07:10.116 { 00:07:10.116 "name": "Passthru0", 00:07:10.116 "aliases": [ 00:07:10.116 "a34511cd-fa70-5d22-8e51-2fba68ffeb8f" 00:07:10.116 ], 00:07:10.116 "product_name": "passthru", 00:07:10.116 "block_size": 512, 00:07:10.116 "num_blocks": 16384, 00:07:10.116 "uuid": "a34511cd-fa70-5d22-8e51-2fba68ffeb8f", 00:07:10.116 "assigned_rate_limits": { 00:07:10.116 "rw_ios_per_sec": 0, 00:07:10.116 "rw_mbytes_per_sec": 0, 00:07:10.116 "r_mbytes_per_sec": 0, 00:07:10.116 "w_mbytes_per_sec": 0 
00:07:10.116 }, 00:07:10.116 "claimed": false, 00:07:10.116 "zoned": false, 00:07:10.116 "supported_io_types": { 00:07:10.116 "read": true, 00:07:10.116 "write": true, 00:07:10.116 "unmap": true, 00:07:10.116 "flush": true, 00:07:10.116 "reset": true, 00:07:10.116 "nvme_admin": false, 00:07:10.116 "nvme_io": false, 00:07:10.116 "nvme_io_md": false, 00:07:10.116 "write_zeroes": true, 00:07:10.116 "zcopy": true, 00:07:10.116 "get_zone_info": false, 00:07:10.116 "zone_management": false, 00:07:10.116 "zone_append": false, 00:07:10.116 "compare": false, 00:07:10.116 "compare_and_write": false, 00:07:10.116 "abort": true, 00:07:10.116 "seek_hole": false, 00:07:10.116 "seek_data": false, 00:07:10.116 "copy": true, 00:07:10.116 "nvme_iov_md": false 00:07:10.116 }, 00:07:10.116 "memory_domains": [ 00:07:10.116 { 00:07:10.116 "dma_device_id": "system", 00:07:10.116 "dma_device_type": 1 00:07:10.116 }, 00:07:10.116 { 00:07:10.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:10.116 "dma_device_type": 2 00:07:10.116 } 00:07:10.116 ], 00:07:10.116 "driver_specific": { 00:07:10.116 "passthru": { 00:07:10.116 "name": "Passthru0", 00:07:10.116 "base_bdev_name": "Malloc2" 00:07:10.116 } 00:07:10.116 } 00:07:10.116 } 00:07:10.116 ]' 00:07:10.116 06:16:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:10.116 06:16:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:10.116 06:16:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:10.116 06:16:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.116 06:16:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:10.116 06:16:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.116 06:16:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:07:10.116 06:16:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:07:10.116 06:16:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:10.116 06:16:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.116 06:16:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:10.116 06:16:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.116 06:16:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:10.116 06:16:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.116 06:16:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:10.116 06:16:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:10.116 ************************************ 00:07:10.116 END TEST rpc_daemon_integrity 00:07:10.116 ************************************ 00:07:10.116 06:16:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:10.116 00:07:10.116 real 0m0.357s 00:07:10.116 user 0m0.196s 00:07:10.116 sys 0m0.055s 00:07:10.116 06:16:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.116 06:16:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:10.116 06:16:54 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:07:10.116 06:16:54 rpc -- rpc/rpc.sh@84 -- # killprocess 57152 00:07:10.116 06:16:54 rpc -- common/autotest_common.sh@954 -- # '[' -z 57152 ']' 00:07:10.116 06:16:54 rpc -- common/autotest_common.sh@958 -- # kill -0 57152 00:07:10.116 06:16:54 rpc -- common/autotest_common.sh@959 -- # uname 00:07:10.375 06:16:54 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:10.375 06:16:54 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57152 00:07:10.375 killing process with pid 57152 00:07:10.375 06:16:54 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:10.375 06:16:54 rpc -- common/autotest_common.sh@964 -- 
# '[' reactor_0 = sudo ']' 00:07:10.375 06:16:54 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57152' 00:07:10.375 06:16:54 rpc -- common/autotest_common.sh@973 -- # kill 57152 00:07:10.375 06:16:54 rpc -- common/autotest_common.sh@978 -- # wait 57152 00:07:12.991 00:07:12.991 real 0m5.582s 00:07:12.991 user 0m6.103s 00:07:12.991 sys 0m0.969s 00:07:12.991 ************************************ 00:07:12.991 END TEST rpc 00:07:12.991 ************************************ 00:07:12.991 06:16:56 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:12.991 06:16:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.991 06:16:56 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:07:12.991 06:16:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:12.991 06:16:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:12.991 06:16:56 -- common/autotest_common.sh@10 -- # set +x 00:07:12.991 ************************************ 00:07:12.991 START TEST skip_rpc 00:07:12.991 ************************************ 00:07:12.991 06:16:56 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:07:12.991 * Looking for test storage... 
00:07:12.991 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:07:12.991 06:16:57 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:12.991 06:16:57 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:12.991 06:16:57 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:07:12.991 06:16:57 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:12.991 06:16:57 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:12.991 06:16:57 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:12.991 06:16:57 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:12.991 06:16:57 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:12.991 06:16:57 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:12.991 06:16:57 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:12.991 06:16:57 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:12.991 06:16:57 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:12.991 06:16:57 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:12.991 06:16:57 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:12.991 06:16:57 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:12.992 06:16:57 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:12.992 06:16:57 skip_rpc -- scripts/common.sh@345 -- # : 1 00:07:12.992 06:16:57 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:12.992 06:16:57 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:12.992 06:16:57 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:12.992 06:16:57 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:07:12.992 06:16:57 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:12.992 06:16:57 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:07:12.992 06:16:57 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:13.251 06:16:57 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:13.251 06:16:57 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:07:13.251 06:16:57 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:13.251 06:16:57 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:07:13.251 06:16:57 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:13.251 06:16:57 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:13.251 06:16:57 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:13.252 06:16:57 skip_rpc -- scripts/common.sh@368 -- # return 0 00:07:13.252 06:16:57 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:13.252 06:16:57 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:13.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.252 --rc genhtml_branch_coverage=1 00:07:13.252 --rc genhtml_function_coverage=1 00:07:13.252 --rc genhtml_legend=1 00:07:13.252 --rc geninfo_all_blocks=1 00:07:13.252 --rc geninfo_unexecuted_blocks=1 00:07:13.252 00:07:13.252 ' 00:07:13.252 06:16:57 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:13.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.252 --rc genhtml_branch_coverage=1 00:07:13.252 --rc genhtml_function_coverage=1 00:07:13.252 --rc genhtml_legend=1 00:07:13.252 --rc geninfo_all_blocks=1 00:07:13.252 --rc geninfo_unexecuted_blocks=1 00:07:13.252 00:07:13.252 ' 00:07:13.252 06:16:57 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:07:13.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.252 --rc genhtml_branch_coverage=1 00:07:13.252 --rc genhtml_function_coverage=1 00:07:13.252 --rc genhtml_legend=1 00:07:13.252 --rc geninfo_all_blocks=1 00:07:13.252 --rc geninfo_unexecuted_blocks=1 00:07:13.252 00:07:13.252 ' 00:07:13.252 06:16:57 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:13.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.252 --rc genhtml_branch_coverage=1 00:07:13.252 --rc genhtml_function_coverage=1 00:07:13.252 --rc genhtml_legend=1 00:07:13.252 --rc geninfo_all_blocks=1 00:07:13.252 --rc geninfo_unexecuted_blocks=1 00:07:13.252 00:07:13.252 ' 00:07:13.252 06:16:57 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:13.252 06:16:57 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:13.252 06:16:57 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:07:13.252 06:16:57 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:13.252 06:16:57 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:13.252 06:16:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.252 ************************************ 00:07:13.252 START TEST skip_rpc 00:07:13.252 ************************************ 00:07:13.252 06:16:57 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:07:13.252 06:16:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57392 00:07:13.252 06:16:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:07:13.252 06:16:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:13.252 06:16:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:07:13.252 [2024-11-26 06:16:57.262460] Starting SPDK v25.01-pre 
git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:07:13.252 [2024-11-26 06:16:57.262610] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57392 ] 00:07:13.510 [2024-11-26 06:16:57.438087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.510 [2024-11-26 06:16:57.559255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.827 06:17:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:07:18.827 06:17:02 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:18.827 06:17:02 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:07:18.827 06:17:02 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:18.827 06:17:02 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:18.827 06:17:02 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:18.827 06:17:02 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:18.827 06:17:02 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:07:18.827 06:17:02 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.827 06:17:02 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.827 06:17:02 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:18.827 06:17:02 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:18.827 06:17:02 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:18.827 06:17:02 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:18.827 06:17:02 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:07:18.827 06:17:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:07:18.827 06:17:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57392 00:07:18.827 06:17:02 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57392 ']' 00:07:18.827 06:17:02 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57392 00:07:18.827 06:17:02 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:07:18.827 06:17:02 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:18.827 06:17:02 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57392 00:07:18.827 06:17:02 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:18.827 06:17:02 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:18.827 06:17:02 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57392' 00:07:18.827 killing process with pid 57392 00:07:18.827 06:17:02 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57392 00:07:18.827 06:17:02 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57392 00:07:20.742 00:07:20.742 real 0m7.618s 00:07:20.742 user 0m7.111s 00:07:20.742 sys 0m0.411s 00:07:20.742 06:17:04 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:20.742 06:17:04 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.742 ************************************ 00:07:20.742 END TEST skip_rpc 00:07:20.742 ************************************ 00:07:20.742 06:17:04 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:07:20.742 06:17:04 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:20.742 06:17:04 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:20.742 06:17:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.742 
************************************ 00:07:20.742 START TEST skip_rpc_with_json 00:07:20.742 ************************************ 00:07:20.742 06:17:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:07:20.742 06:17:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:07:20.742 06:17:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57496 00:07:20.742 06:17:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:20.742 06:17:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:20.742 06:17:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57496 00:07:20.742 06:17:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57496 ']' 00:07:20.742 06:17:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.742 06:17:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:20.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.742 06:17:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.742 06:17:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:20.742 06:17:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:21.001 [2024-11-26 06:17:04.957413] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:07:21.002 [2024-11-26 06:17:04.958165] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57496 ] 00:07:21.002 [2024-11-26 06:17:05.130711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.261 [2024-11-26 06:17:05.254459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.208 06:17:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:22.208 06:17:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:07:22.208 06:17:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:07:22.208 06:17:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.208 06:17:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:22.208 [2024-11-26 06:17:06.138527] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:07:22.208 request: 00:07:22.208 { 00:07:22.208 "trtype": "tcp", 00:07:22.208 "method": "nvmf_get_transports", 00:07:22.208 "req_id": 1 00:07:22.208 } 00:07:22.208 Got JSON-RPC error response 00:07:22.208 response: 00:07:22.208 { 00:07:22.208 "code": -19, 00:07:22.208 "message": "No such device" 00:07:22.208 } 00:07:22.208 06:17:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:22.208 06:17:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:07:22.208 06:17:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.208 06:17:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:22.208 [2024-11-26 06:17:06.150641] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:07:22.208 06:17:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.208 06:17:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:07:22.208 06:17:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.208 06:17:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:22.208 06:17:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.208 06:17:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:22.208 { 00:07:22.208 "subsystems": [ 00:07:22.208 { 00:07:22.208 "subsystem": "fsdev", 00:07:22.208 "config": [ 00:07:22.208 { 00:07:22.208 "method": "fsdev_set_opts", 00:07:22.208 "params": { 00:07:22.208 "fsdev_io_pool_size": 65535, 00:07:22.208 "fsdev_io_cache_size": 256 00:07:22.208 } 00:07:22.208 } 00:07:22.208 ] 00:07:22.208 }, 00:07:22.208 { 00:07:22.208 "subsystem": "keyring", 00:07:22.208 "config": [] 00:07:22.208 }, 00:07:22.208 { 00:07:22.208 "subsystem": "iobuf", 00:07:22.208 "config": [ 00:07:22.208 { 00:07:22.208 "method": "iobuf_set_options", 00:07:22.208 "params": { 00:07:22.208 "small_pool_count": 8192, 00:07:22.208 "large_pool_count": 1024, 00:07:22.208 "small_bufsize": 8192, 00:07:22.208 "large_bufsize": 135168, 00:07:22.208 "enable_numa": false 00:07:22.208 } 00:07:22.208 } 00:07:22.208 ] 00:07:22.209 }, 00:07:22.209 { 00:07:22.209 "subsystem": "sock", 00:07:22.209 "config": [ 00:07:22.209 { 00:07:22.209 "method": "sock_set_default_impl", 00:07:22.209 "params": { 00:07:22.209 "impl_name": "posix" 00:07:22.209 } 00:07:22.209 }, 00:07:22.209 { 00:07:22.209 "method": "sock_impl_set_options", 00:07:22.209 "params": { 00:07:22.209 "impl_name": "ssl", 00:07:22.209 "recv_buf_size": 4096, 00:07:22.209 "send_buf_size": 4096, 00:07:22.209 "enable_recv_pipe": true, 00:07:22.209 "enable_quickack": false, 00:07:22.209 
"enable_placement_id": 0, 00:07:22.209 "enable_zerocopy_send_server": true, 00:07:22.209 "enable_zerocopy_send_client": false, 00:07:22.209 "zerocopy_threshold": 0, 00:07:22.209 "tls_version": 0, 00:07:22.209 "enable_ktls": false 00:07:22.209 } 00:07:22.209 }, 00:07:22.209 { 00:07:22.209 "method": "sock_impl_set_options", 00:07:22.209 "params": { 00:07:22.209 "impl_name": "posix", 00:07:22.209 "recv_buf_size": 2097152, 00:07:22.209 "send_buf_size": 2097152, 00:07:22.209 "enable_recv_pipe": true, 00:07:22.209 "enable_quickack": false, 00:07:22.209 "enable_placement_id": 0, 00:07:22.209 "enable_zerocopy_send_server": true, 00:07:22.209 "enable_zerocopy_send_client": false, 00:07:22.209 "zerocopy_threshold": 0, 00:07:22.209 "tls_version": 0, 00:07:22.209 "enable_ktls": false 00:07:22.209 } 00:07:22.209 } 00:07:22.209 ] 00:07:22.209 }, 00:07:22.209 { 00:07:22.209 "subsystem": "vmd", 00:07:22.209 "config": [] 00:07:22.209 }, 00:07:22.209 { 00:07:22.209 "subsystem": "accel", 00:07:22.209 "config": [ 00:07:22.209 { 00:07:22.209 "method": "accel_set_options", 00:07:22.209 "params": { 00:07:22.209 "small_cache_size": 128, 00:07:22.209 "large_cache_size": 16, 00:07:22.209 "task_count": 2048, 00:07:22.209 "sequence_count": 2048, 00:07:22.209 "buf_count": 2048 00:07:22.209 } 00:07:22.209 } 00:07:22.209 ] 00:07:22.209 }, 00:07:22.209 { 00:07:22.209 "subsystem": "bdev", 00:07:22.209 "config": [ 00:07:22.209 { 00:07:22.209 "method": "bdev_set_options", 00:07:22.209 "params": { 00:07:22.209 "bdev_io_pool_size": 65535, 00:07:22.209 "bdev_io_cache_size": 256, 00:07:22.209 "bdev_auto_examine": true, 00:07:22.209 "iobuf_small_cache_size": 128, 00:07:22.209 "iobuf_large_cache_size": 16 00:07:22.209 } 00:07:22.209 }, 00:07:22.209 { 00:07:22.209 "method": "bdev_raid_set_options", 00:07:22.209 "params": { 00:07:22.209 "process_window_size_kb": 1024, 00:07:22.209 "process_max_bandwidth_mb_sec": 0 00:07:22.209 } 00:07:22.209 }, 00:07:22.209 { 00:07:22.209 "method": "bdev_iscsi_set_options", 
00:07:22.209 "params": { 00:07:22.209 "timeout_sec": 30 00:07:22.209 } 00:07:22.209 }, 00:07:22.209 { 00:07:22.209 "method": "bdev_nvme_set_options", 00:07:22.209 "params": { 00:07:22.209 "action_on_timeout": "none", 00:07:22.209 "timeout_us": 0, 00:07:22.209 "timeout_admin_us": 0, 00:07:22.209 "keep_alive_timeout_ms": 10000, 00:07:22.209 "arbitration_burst": 0, 00:07:22.209 "low_priority_weight": 0, 00:07:22.209 "medium_priority_weight": 0, 00:07:22.209 "high_priority_weight": 0, 00:07:22.209 "nvme_adminq_poll_period_us": 10000, 00:07:22.209 "nvme_ioq_poll_period_us": 0, 00:07:22.209 "io_queue_requests": 0, 00:07:22.209 "delay_cmd_submit": true, 00:07:22.209 "transport_retry_count": 4, 00:07:22.209 "bdev_retry_count": 3, 00:07:22.209 "transport_ack_timeout": 0, 00:07:22.209 "ctrlr_loss_timeout_sec": 0, 00:07:22.209 "reconnect_delay_sec": 0, 00:07:22.209 "fast_io_fail_timeout_sec": 0, 00:07:22.209 "disable_auto_failback": false, 00:07:22.209 "generate_uuids": false, 00:07:22.209 "transport_tos": 0, 00:07:22.209 "nvme_error_stat": false, 00:07:22.209 "rdma_srq_size": 0, 00:07:22.209 "io_path_stat": false, 00:07:22.209 "allow_accel_sequence": false, 00:07:22.209 "rdma_max_cq_size": 0, 00:07:22.209 "rdma_cm_event_timeout_ms": 0, 00:07:22.209 "dhchap_digests": [ 00:07:22.209 "sha256", 00:07:22.209 "sha384", 00:07:22.209 "sha512" 00:07:22.209 ], 00:07:22.209 "dhchap_dhgroups": [ 00:07:22.209 "null", 00:07:22.209 "ffdhe2048", 00:07:22.209 "ffdhe3072", 00:07:22.209 "ffdhe4096", 00:07:22.209 "ffdhe6144", 00:07:22.209 "ffdhe8192" 00:07:22.209 ] 00:07:22.209 } 00:07:22.209 }, 00:07:22.209 { 00:07:22.209 "method": "bdev_nvme_set_hotplug", 00:07:22.209 "params": { 00:07:22.209 "period_us": 100000, 00:07:22.209 "enable": false 00:07:22.209 } 00:07:22.209 }, 00:07:22.209 { 00:07:22.209 "method": "bdev_wait_for_examine" 00:07:22.209 } 00:07:22.209 ] 00:07:22.209 }, 00:07:22.209 { 00:07:22.209 "subsystem": "scsi", 00:07:22.209 "config": null 00:07:22.209 }, 00:07:22.209 { 
00:07:22.209 "subsystem": "scheduler", 00:07:22.209 "config": [ 00:07:22.209 { 00:07:22.209 "method": "framework_set_scheduler", 00:07:22.209 "params": { 00:07:22.209 "name": "static" 00:07:22.209 } 00:07:22.209 } 00:07:22.209 ] 00:07:22.209 }, 00:07:22.209 { 00:07:22.209 "subsystem": "vhost_scsi", 00:07:22.209 "config": [] 00:07:22.209 }, 00:07:22.209 { 00:07:22.209 "subsystem": "vhost_blk", 00:07:22.209 "config": [] 00:07:22.209 }, 00:07:22.209 { 00:07:22.209 "subsystem": "ublk", 00:07:22.209 "config": [] 00:07:22.209 }, 00:07:22.209 { 00:07:22.209 "subsystem": "nbd", 00:07:22.209 "config": [] 00:07:22.209 }, 00:07:22.209 { 00:07:22.209 "subsystem": "nvmf", 00:07:22.209 "config": [ 00:07:22.209 { 00:07:22.209 "method": "nvmf_set_config", 00:07:22.209 "params": { 00:07:22.209 "discovery_filter": "match_any", 00:07:22.209 "admin_cmd_passthru": { 00:07:22.209 "identify_ctrlr": false 00:07:22.209 }, 00:07:22.209 "dhchap_digests": [ 00:07:22.209 "sha256", 00:07:22.209 "sha384", 00:07:22.209 "sha512" 00:07:22.209 ], 00:07:22.209 "dhchap_dhgroups": [ 00:07:22.209 "null", 00:07:22.209 "ffdhe2048", 00:07:22.209 "ffdhe3072", 00:07:22.209 "ffdhe4096", 00:07:22.209 "ffdhe6144", 00:07:22.209 "ffdhe8192" 00:07:22.209 ] 00:07:22.209 } 00:07:22.209 }, 00:07:22.209 { 00:07:22.209 "method": "nvmf_set_max_subsystems", 00:07:22.209 "params": { 00:07:22.209 "max_subsystems": 1024 00:07:22.209 } 00:07:22.209 }, 00:07:22.209 { 00:07:22.209 "method": "nvmf_set_crdt", 00:07:22.209 "params": { 00:07:22.209 "crdt1": 0, 00:07:22.209 "crdt2": 0, 00:07:22.209 "crdt3": 0 00:07:22.209 } 00:07:22.209 }, 00:07:22.209 { 00:07:22.209 "method": "nvmf_create_transport", 00:07:22.209 "params": { 00:07:22.209 "trtype": "TCP", 00:07:22.209 "max_queue_depth": 128, 00:07:22.209 "max_io_qpairs_per_ctrlr": 127, 00:07:22.209 "in_capsule_data_size": 4096, 00:07:22.209 "max_io_size": 131072, 00:07:22.209 "io_unit_size": 131072, 00:07:22.209 "max_aq_depth": 128, 00:07:22.209 "num_shared_buffers": 511, 
00:07:22.209 "buf_cache_size": 4294967295, 00:07:22.209 "dif_insert_or_strip": false, 00:07:22.209 "zcopy": false, 00:07:22.209 "c2h_success": true, 00:07:22.209 "sock_priority": 0, 00:07:22.209 "abort_timeout_sec": 1, 00:07:22.209 "ack_timeout": 0, 00:07:22.209 "data_wr_pool_size": 0 00:07:22.209 } 00:07:22.209 } 00:07:22.209 ] 00:07:22.209 }, 00:07:22.209 { 00:07:22.209 "subsystem": "iscsi", 00:07:22.209 "config": [ 00:07:22.209 { 00:07:22.209 "method": "iscsi_set_options", 00:07:22.209 "params": { 00:07:22.209 "node_base": "iqn.2016-06.io.spdk", 00:07:22.209 "max_sessions": 128, 00:07:22.209 "max_connections_per_session": 2, 00:07:22.209 "max_queue_depth": 64, 00:07:22.209 "default_time2wait": 2, 00:07:22.209 "default_time2retain": 20, 00:07:22.209 "first_burst_length": 8192, 00:07:22.209 "immediate_data": true, 00:07:22.209 "allow_duplicated_isid": false, 00:07:22.209 "error_recovery_level": 0, 00:07:22.209 "nop_timeout": 60, 00:07:22.209 "nop_in_interval": 30, 00:07:22.209 "disable_chap": false, 00:07:22.209 "require_chap": false, 00:07:22.209 "mutual_chap": false, 00:07:22.209 "chap_group": 0, 00:07:22.209 "max_large_datain_per_connection": 64, 00:07:22.209 "max_r2t_per_connection": 4, 00:07:22.209 "pdu_pool_size": 36864, 00:07:22.209 "immediate_data_pool_size": 16384, 00:07:22.209 "data_out_pool_size": 2048 00:07:22.209 } 00:07:22.209 } 00:07:22.209 ] 00:07:22.209 } 00:07:22.209 ] 00:07:22.209 } 00:07:22.209 06:17:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:22.209 06:17:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57496 00:07:22.209 06:17:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57496 ']' 00:07:22.209 06:17:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57496 00:07:22.209 06:17:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:07:22.209 06:17:06 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:22.209 06:17:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57496 00:07:22.469 06:17:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:22.469 06:17:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:22.469 killing process with pid 57496 00:07:22.469 06:17:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57496' 00:07:22.469 06:17:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57496 00:07:22.469 06:17:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57496 00:07:25.008 06:17:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57552 00:07:25.008 06:17:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:25.008 06:17:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:07:30.309 06:17:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57552 00:07:30.309 06:17:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57552 ']' 00:07:30.309 06:17:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57552 00:07:30.309 06:17:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:07:30.309 06:17:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:30.309 06:17:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57552 00:07:30.309 06:17:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:30.309 06:17:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = 
sudo ']' 00:07:30.309 killing process with pid 57552 00:07:30.309 06:17:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57552' 00:07:30.309 06:17:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57552 00:07:30.309 06:17:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57552 00:07:32.279 06:17:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:32.279 06:17:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:32.279 00:07:32.279 real 0m11.571s 00:07:32.279 user 0m11.026s 00:07:32.279 sys 0m0.844s 00:07:32.279 06:17:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:32.279 06:17:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:32.279 ************************************ 00:07:32.279 END TEST skip_rpc_with_json 00:07:32.279 ************************************ 00:07:32.539 06:17:16 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:07:32.539 06:17:16 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:32.539 06:17:16 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:32.539 06:17:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:32.539 ************************************ 00:07:32.539 START TEST skip_rpc_with_delay 00:07:32.539 ************************************ 00:07:32.539 06:17:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:07:32.539 06:17:16 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:32.539 06:17:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:07:32.539 06:17:16 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:32.539 06:17:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:32.539 06:17:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:32.539 06:17:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:32.539 06:17:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:32.539 06:17:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:32.539 06:17:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:32.539 06:17:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:32.539 06:17:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:07:32.539 06:17:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:32.539 [2024-11-26 06:17:16.583722] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
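The `NOT` helper exercised above runs `spdk_tgt --no-rpc-server --wait-for-rpc` precisely because it must fail (the `*ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started.` record), and the test passes only when that failure is observed. A minimal standalone sketch of the same expected-failure pattern (a simplified re-implementation, not the harness's actual `autotest_common.sh` definition):

```shell
# Sketch of the expected-failure pattern: run a command, succeed only if it
# fails. The real helper also tracks the exit status in an "es" variable.
NOT() {
  if "$@"; then
    return 1   # command unexpectedly succeeded
  fi
  return 0     # command failed, as the test requires
}

NOT false && echo "expected failure detected"
```

In the log above the same inversion lets the suite assert that starting the target with contradictory flags aborts cleanly instead of hanging.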
00:07:32.539 06:17:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:07:32.539 06:17:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:32.540 06:17:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:32.540 06:17:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:32.540 00:07:32.540 real 0m0.171s 00:07:32.540 user 0m0.094s 00:07:32.540 sys 0m0.074s 00:07:32.540 06:17:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:32.540 ************************************ 00:07:32.540 END TEST skip_rpc_with_delay 00:07:32.540 ************************************ 00:07:32.540 06:17:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:07:32.800 06:17:16 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:07:32.800 06:17:16 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:07:32.800 06:17:16 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:07:32.800 06:17:16 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:32.800 06:17:16 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:32.800 06:17:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:32.800 ************************************ 00:07:32.800 START TEST exit_on_failed_rpc_init 00:07:32.800 ************************************ 00:07:32.800 06:17:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:07:32.800 06:17:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57686 00:07:32.800 06:17:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:32.800 06:17:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57686 00:07:32.800 06:17:16 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57686 ']' 00:07:32.800 06:17:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.800 06:17:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:32.800 06:17:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.800 06:17:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:32.800 06:17:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:32.800 [2024-11-26 06:17:16.823881] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:07:32.800 [2024-11-26 06:17:16.824121] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57686 ] 00:07:33.060 [2024-11-26 06:17:16.995346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.060 [2024-11-26 06:17:17.110808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.997 06:17:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:33.997 06:17:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:07:33.997 06:17:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:33.997 06:17:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:33.997 06:17:17 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:07:33.997 06:17:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:33.997 06:17:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:33.997 06:17:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:33.997 06:17:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:33.997 06:17:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:33.997 06:17:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:33.997 06:17:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:33.997 06:17:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:33.997 06:17:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:07:33.997 06:17:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:33.997 [2024-11-26 06:17:18.070231] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
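The second `spdk_tgt` instance started here is expected to die with `RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.`, because the first instance still holds the socket. The underlying OS behavior can be sketched without SPDK at all (socket path below is illustrative, not the one from the log):

```shell
# Sketch: two processes cannot bind the same Unix domain socket path; the
# second bind fails with EADDRINUSE, which is what the log's second target
# instance reports via its RPC listener.
SOCK=/tmp/demo_rpc.sock
python3 - "$SOCK" <<'EOF' && echo "second bind refused"
import os, socket, sys
path = sys.argv[1]
try:
    os.unlink(path)           # start from a clean path
except FileNotFoundError:
    pass
a = socket.socket(socket.AF_UNIX)
a.bind(path)                  # first listener owns the path
a.listen(1)
b = socket.socket(socket.AF_UNIX)
try:
    b.bind(path)              # same path: expected to fail
    sys.exit(1)
except OSError:
    sys.exit(0)               # EADDRINUSE observed, as in the log
EOF
```

This is why the test's `NOT` wrapper around the second launch succeeds: the failure is the asserted behavior.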
00:07:33.997 [2024-11-26 06:17:18.070425] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57708 ] 00:07:34.256 [2024-11-26 06:17:18.245950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.515 [2024-11-26 06:17:18.393586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.515 [2024-11-26 06:17:18.393820] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:07:34.515 [2024-11-26 06:17:18.393885] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:07:34.515 [2024-11-26 06:17:18.393929] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:34.775 06:17:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:07:34.775 06:17:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:34.775 06:17:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:07:34.775 06:17:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:07:34.775 06:17:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:07:34.775 06:17:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:34.775 06:17:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:34.775 06:17:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57686 00:07:34.775 06:17:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57686 ']' 00:07:34.775 06:17:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57686 00:07:34.775 06:17:18 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:07:34.775 06:17:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:34.775 06:17:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57686 00:07:34.775 killing process with pid 57686 00:07:34.775 06:17:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:34.775 06:17:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:34.775 06:17:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57686' 00:07:34.775 06:17:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57686 00:07:34.775 06:17:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57686 00:07:37.321 00:07:37.321 real 0m4.407s 00:07:37.321 user 0m4.754s 00:07:37.321 sys 0m0.589s 00:07:37.321 06:17:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:37.321 06:17:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:37.321 ************************************ 00:07:37.321 END TEST exit_on_failed_rpc_init 00:07:37.321 ************************************ 00:07:37.322 06:17:21 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:37.322 ************************************ 00:07:37.322 END TEST skip_rpc 00:07:37.322 ************************************ 00:07:37.322 00:07:37.322 real 0m24.288s 00:07:37.322 user 0m23.202s 00:07:37.322 sys 0m2.237s 00:07:37.322 06:17:21 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:37.322 06:17:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:37.322 06:17:21 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:37.322 06:17:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:37.322 06:17:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:37.322 06:17:21 -- common/autotest_common.sh@10 -- # set +x 00:07:37.322 ************************************ 00:07:37.322 START TEST rpc_client 00:07:37.322 ************************************ 00:07:37.322 06:17:21 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:37.322 * Looking for test storage... 00:07:37.322 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:07:37.322 06:17:21 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:37.322 06:17:21 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:07:37.322 06:17:21 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:37.580 06:17:21 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:37.581 06:17:21 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:37.581 06:17:21 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:37.581 06:17:21 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:37.581 06:17:21 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:07:37.581 06:17:21 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:07:37.581 06:17:21 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:07:37.581 06:17:21 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:07:37.581 06:17:21 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:07:37.581 06:17:21 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:07:37.581 06:17:21 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:07:37.581 06:17:21 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:37.581 06:17:21 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:07:37.581 06:17:21 rpc_client -- scripts/common.sh@345 
-- # : 1 00:07:37.581 06:17:21 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:37.581 06:17:21 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:37.581 06:17:21 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:07:37.581 06:17:21 rpc_client -- scripts/common.sh@353 -- # local d=1 00:07:37.581 06:17:21 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:37.581 06:17:21 rpc_client -- scripts/common.sh@355 -- # echo 1 00:07:37.581 06:17:21 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:07:37.581 06:17:21 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:07:37.581 06:17:21 rpc_client -- scripts/common.sh@353 -- # local d=2 00:07:37.581 06:17:21 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:37.581 06:17:21 rpc_client -- scripts/common.sh@355 -- # echo 2 00:07:37.581 06:17:21 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:07:37.581 06:17:21 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:37.581 06:17:21 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:37.581 06:17:21 rpc_client -- scripts/common.sh@368 -- # return 0 00:07:37.581 06:17:21 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:37.581 06:17:21 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:37.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.581 --rc genhtml_branch_coverage=1 00:07:37.581 --rc genhtml_function_coverage=1 00:07:37.581 --rc genhtml_legend=1 00:07:37.581 --rc geninfo_all_blocks=1 00:07:37.581 --rc geninfo_unexecuted_blocks=1 00:07:37.581 00:07:37.581 ' 00:07:37.581 06:17:21 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:37.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.581 --rc genhtml_branch_coverage=1 00:07:37.581 --rc genhtml_function_coverage=1 00:07:37.581 --rc 
genhtml_legend=1 00:07:37.581 --rc geninfo_all_blocks=1 00:07:37.581 --rc geninfo_unexecuted_blocks=1 00:07:37.581 00:07:37.581 ' 00:07:37.581 06:17:21 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:37.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.581 --rc genhtml_branch_coverage=1 00:07:37.581 --rc genhtml_function_coverage=1 00:07:37.581 --rc genhtml_legend=1 00:07:37.581 --rc geninfo_all_blocks=1 00:07:37.581 --rc geninfo_unexecuted_blocks=1 00:07:37.581 00:07:37.581 ' 00:07:37.581 06:17:21 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:37.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.581 --rc genhtml_branch_coverage=1 00:07:37.581 --rc genhtml_function_coverage=1 00:07:37.581 --rc genhtml_legend=1 00:07:37.581 --rc geninfo_all_blocks=1 00:07:37.581 --rc geninfo_unexecuted_blocks=1 00:07:37.581 00:07:37.581 ' 00:07:37.581 06:17:21 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:07:37.581 OK 00:07:37.581 06:17:21 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:07:37.581 00:07:37.581 real 0m0.316s 00:07:37.581 user 0m0.171s 00:07:37.581 sys 0m0.160s 00:07:37.581 06:17:21 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:37.581 06:17:21 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:07:37.581 ************************************ 00:07:37.581 END TEST rpc_client 00:07:37.581 ************************************ 00:07:37.581 06:17:21 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:37.581 06:17:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:37.581 06:17:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:37.581 06:17:21 -- common/autotest_common.sh@10 -- # set +x 00:07:37.581 ************************************ 00:07:37.581 START TEST json_config 
00:07:37.581 ************************************ 00:07:37.581 06:17:21 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:37.841 06:17:21 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:37.841 06:17:21 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:07:37.841 06:17:21 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:37.841 06:17:21 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:37.841 06:17:21 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:37.841 06:17:21 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:37.841 06:17:21 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:37.841 06:17:21 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:07:37.841 06:17:21 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:07:37.841 06:17:21 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:07:37.841 06:17:21 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:07:37.841 06:17:21 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:07:37.841 06:17:21 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:07:37.841 06:17:21 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:07:37.841 06:17:21 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:37.841 06:17:21 json_config -- scripts/common.sh@344 -- # case "$op" in 00:07:37.841 06:17:21 json_config -- scripts/common.sh@345 -- # : 1 00:07:37.841 06:17:21 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:37.841 06:17:21 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:37.841 06:17:21 json_config -- scripts/common.sh@365 -- # decimal 1 00:07:37.841 06:17:21 json_config -- scripts/common.sh@353 -- # local d=1 00:07:37.841 06:17:21 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:37.841 06:17:21 json_config -- scripts/common.sh@355 -- # echo 1 00:07:37.841 06:17:21 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:07:37.841 06:17:21 json_config -- scripts/common.sh@366 -- # decimal 2 00:07:37.841 06:17:21 json_config -- scripts/common.sh@353 -- # local d=2 00:07:37.841 06:17:21 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:37.841 06:17:21 json_config -- scripts/common.sh@355 -- # echo 2 00:07:37.841 06:17:21 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:07:37.841 06:17:21 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:37.841 06:17:21 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:37.841 06:17:21 json_config -- scripts/common.sh@368 -- # return 0 00:07:37.841 06:17:21 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:37.841 06:17:21 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:37.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.841 --rc genhtml_branch_coverage=1 00:07:37.841 --rc genhtml_function_coverage=1 00:07:37.841 --rc genhtml_legend=1 00:07:37.841 --rc geninfo_all_blocks=1 00:07:37.841 --rc geninfo_unexecuted_blocks=1 00:07:37.841 00:07:37.841 ' 00:07:37.841 06:17:21 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:37.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.841 --rc genhtml_branch_coverage=1 00:07:37.841 --rc genhtml_function_coverage=1 00:07:37.841 --rc genhtml_legend=1 00:07:37.841 --rc geninfo_all_blocks=1 00:07:37.841 --rc geninfo_unexecuted_blocks=1 00:07:37.841 00:07:37.841 ' 00:07:37.841 06:17:21 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:37.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.841 --rc genhtml_branch_coverage=1 00:07:37.841 --rc genhtml_function_coverage=1 00:07:37.841 --rc genhtml_legend=1 00:07:37.841 --rc geninfo_all_blocks=1 00:07:37.841 --rc geninfo_unexecuted_blocks=1 00:07:37.841 00:07:37.841 ' 00:07:37.841 06:17:21 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:37.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.841 --rc genhtml_branch_coverage=1 00:07:37.841 --rc genhtml_function_coverage=1 00:07:37.841 --rc genhtml_legend=1 00:07:37.841 --rc geninfo_all_blocks=1 00:07:37.841 --rc geninfo_unexecuted_blocks=1 00:07:37.841 00:07:37.841 ' 00:07:37.841 06:17:21 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:37.841 06:17:21 json_config -- nvmf/common.sh@7 -- # uname -s 00:07:37.841 06:17:21 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:37.841 06:17:21 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:37.841 06:17:21 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:37.841 06:17:21 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:37.841 06:17:21 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:37.841 06:17:21 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:37.841 06:17:21 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:37.841 06:17:21 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:37.841 06:17:21 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:37.841 06:17:21 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:37.841 06:17:21 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6f76d946-b2f9-4be1-9537-21eaa0074f60 00:07:37.841 06:17:21 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=6f76d946-b2f9-4be1-9537-21eaa0074f60 00:07:37.841 06:17:21 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:37.841 06:17:21 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:37.841 06:17:21 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:37.841 06:17:21 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:37.842 06:17:21 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:37.842 06:17:21 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:07:37.842 06:17:21 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:37.842 06:17:21 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:37.842 06:17:21 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:37.842 06:17:21 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.842 06:17:21 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.842 06:17:21 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.842 06:17:21 json_config -- paths/export.sh@5 -- # export PATH 00:07:37.842 06:17:21 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.842 06:17:21 json_config -- nvmf/common.sh@51 -- # : 0 00:07:37.842 06:17:21 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:37.842 06:17:21 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:37.842 06:17:21 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:37.842 06:17:21 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:37.842 06:17:21 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:37.842 06:17:21 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:37.842 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:37.842 06:17:21 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:37.842 06:17:21 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:37.842 06:17:21 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:37.842 06:17:21 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:07:37.842 06:17:21 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:07:37.842 06:17:21 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:07:37.842 06:17:21 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:07:37.842 06:17:21 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:07:37.842 06:17:21 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:07:37.842 WARNING: No tests are enabled so not running JSON configuration tests 00:07:37.842 06:17:21 json_config -- json_config/json_config.sh@28 -- # exit 0 00:07:37.842 00:07:37.842 real 0m0.243s 00:07:37.842 user 0m0.153s 00:07:37.842 sys 0m0.092s 00:07:37.842 06:17:21 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:37.842 06:17:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:37.842 ************************************ 00:07:37.842 END TEST json_config 00:07:37.842 ************************************ 00:07:37.842 06:17:21 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:37.842 06:17:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:37.842 06:17:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:37.842 06:17:21 -- common/autotest_common.sh@10 -- # set +x 00:07:37.842 ************************************ 00:07:37.842 START TEST json_config_extra_key 00:07:37.842 ************************************ 00:07:37.842 06:17:21 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:38.101 06:17:22 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:38.101 06:17:22 json_config_extra_key -- 
common/autotest_common.sh@1693 -- # lcov --version 00:07:38.101 06:17:22 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:38.101 06:17:22 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:38.101 06:17:22 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:38.101 06:17:22 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:38.101 06:17:22 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:38.101 06:17:22 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:07:38.101 06:17:22 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:07:38.101 06:17:22 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:07:38.101 06:17:22 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:07:38.101 06:17:22 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:07:38.101 06:17:22 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:07:38.101 06:17:22 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:07:38.101 06:17:22 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:38.101 06:17:22 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:07:38.101 06:17:22 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:07:38.101 06:17:22 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:38.101 06:17:22 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:38.101 06:17:22 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:07:38.101 06:17:22 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:07:38.101 06:17:22 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:38.101 06:17:22 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:07:38.101 06:17:22 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:07:38.101 06:17:22 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:07:38.101 06:17:22 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:07:38.101 06:17:22 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:38.101 06:17:22 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:07:38.101 06:17:22 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:07:38.101 06:17:22 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:38.101 06:17:22 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:38.101 06:17:22 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:07:38.101 06:17:22 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:38.101 06:17:22 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:38.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.101 --rc genhtml_branch_coverage=1 00:07:38.101 --rc genhtml_function_coverage=1 00:07:38.101 --rc genhtml_legend=1 00:07:38.101 --rc geninfo_all_blocks=1 00:07:38.101 --rc geninfo_unexecuted_blocks=1 00:07:38.101 00:07:38.101 ' 00:07:38.101 06:17:22 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:38.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.101 --rc genhtml_branch_coverage=1 00:07:38.101 --rc genhtml_function_coverage=1 00:07:38.101 --rc 
genhtml_legend=1 00:07:38.101 --rc geninfo_all_blocks=1 00:07:38.101 --rc geninfo_unexecuted_blocks=1 00:07:38.101 00:07:38.101 ' 00:07:38.101 06:17:22 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:38.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.101 --rc genhtml_branch_coverage=1 00:07:38.101 --rc genhtml_function_coverage=1 00:07:38.101 --rc genhtml_legend=1 00:07:38.101 --rc geninfo_all_blocks=1 00:07:38.101 --rc geninfo_unexecuted_blocks=1 00:07:38.101 00:07:38.101 ' 00:07:38.101 06:17:22 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:38.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.101 --rc genhtml_branch_coverage=1 00:07:38.101 --rc genhtml_function_coverage=1 00:07:38.101 --rc genhtml_legend=1 00:07:38.101 --rc geninfo_all_blocks=1 00:07:38.101 --rc geninfo_unexecuted_blocks=1 00:07:38.101 00:07:38.101 ' 00:07:38.101 06:17:22 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:38.101 06:17:22 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:07:38.101 06:17:22 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:38.101 06:17:22 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:38.101 06:17:22 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:38.101 06:17:22 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:38.101 06:17:22 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:38.101 06:17:22 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:38.101 06:17:22 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:38.101 06:17:22 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:38.101 06:17:22 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:38.101 06:17:22 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:38.101 06:17:22 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6f76d946-b2f9-4be1-9537-21eaa0074f60 00:07:38.101 06:17:22 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=6f76d946-b2f9-4be1-9537-21eaa0074f60 00:07:38.101 06:17:22 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:38.101 06:17:22 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:38.101 06:17:22 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:38.101 06:17:22 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:38.101 06:17:22 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:38.101 06:17:22 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:07:38.101 06:17:22 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:38.101 06:17:22 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:38.101 06:17:22 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:38.101 06:17:22 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.101 06:17:22 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.101 06:17:22 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.101 06:17:22 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:07:38.101 06:17:22 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.101 06:17:22 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:07:38.101 06:17:22 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:38.101 06:17:22 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:38.102 06:17:22 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:38.102 06:17:22 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:38.102 06:17:22 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:07:38.102 06:17:22 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:38.102 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:38.102 06:17:22 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:38.102 06:17:22 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:38.102 06:17:22 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:38.102 06:17:22 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:07:38.102 06:17:22 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:07:38.102 06:17:22 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:07:38.102 06:17:22 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:07:38.102 06:17:22 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:07:38.102 06:17:22 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:07:38.102 06:17:22 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:07:38.102 06:17:22 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:07:38.102 06:17:22 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:07:38.102 06:17:22 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:38.102 06:17:22 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:07:38.102 INFO: launching applications... 
00:07:38.102 06:17:22 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:38.102 06:17:22 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:07:38.102 06:17:22 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:07:38.102 06:17:22 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:38.102 06:17:22 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:38.102 06:17:22 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:07:38.102 06:17:22 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:38.102 06:17:22 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:38.102 06:17:22 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57914 00:07:38.102 06:17:22 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:38.102 Waiting for target to run... 00:07:38.102 06:17:22 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57914 /var/tmp/spdk_tgt.sock 00:07:38.102 06:17:22 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57914 ']' 00:07:38.102 06:17:22 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:38.102 06:17:22 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:38.102 06:17:22 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:38.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:07:38.102 06:17:22 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:38.102 06:17:22 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:38.102 06:17:22 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:38.361 [2024-11-26 06:17:22.285345] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:07:38.361 [2024-11-26 06:17:22.285598] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57914 ] 00:07:38.930 [2024-11-26 06:17:22.870409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.930 [2024-11-26 06:17:22.981820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.867 06:17:23 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:39.867 06:17:23 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:07:39.867 06:17:23 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:07:39.867 00:07:39.867 06:17:23 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:07:39.867 INFO: shutting down applications... 
00:07:39.867 06:17:23 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:07:39.867 06:17:23 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:07:39.867 06:17:23 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:39.867 06:17:23 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57914 ]] 00:07:39.867 06:17:23 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57914 00:07:39.867 06:17:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:39.867 06:17:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:39.867 06:17:23 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57914 00:07:39.867 06:17:23 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:40.126 06:17:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:40.126 06:17:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:40.126 06:17:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57914 00:07:40.126 06:17:24 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:40.695 06:17:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:40.695 06:17:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:40.695 06:17:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57914 00:07:40.695 06:17:24 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:41.263 06:17:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:41.263 06:17:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:41.263 06:17:25 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57914 00:07:41.263 06:17:25 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:41.830 06:17:25 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:07:41.830 06:17:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:41.830 06:17:25 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57914 00:07:41.830 06:17:25 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:42.398 06:17:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:42.398 06:17:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:42.398 06:17:26 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57914 00:07:42.398 06:17:26 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:42.657 06:17:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:42.657 06:17:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:42.657 06:17:26 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57914 00:07:42.657 SPDK target shutdown done 00:07:42.657 Success 00:07:42.657 06:17:26 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:42.657 06:17:26 json_config_extra_key -- json_config/common.sh@43 -- # break 00:07:42.657 06:17:26 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:42.657 06:17:26 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:42.657 06:17:26 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:07:42.657 00:07:42.657 real 0m4.799s 00:07:42.657 user 0m4.126s 00:07:42.657 sys 0m0.796s 00:07:42.657 06:17:26 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:42.657 06:17:26 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:42.657 ************************************ 00:07:42.657 END TEST json_config_extra_key 00:07:42.657 ************************************ 00:07:42.916 06:17:26 -- spdk/autotest.sh@161 -- # run_test alias_rpc 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:42.916 06:17:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:42.916 06:17:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:42.916 06:17:26 -- common/autotest_common.sh@10 -- # set +x 00:07:42.916 ************************************ 00:07:42.916 START TEST alias_rpc 00:07:42.916 ************************************ 00:07:42.916 06:17:26 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:42.916 * Looking for test storage... 00:07:42.916 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:07:42.916 06:17:26 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:42.916 06:17:26 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:07:42.916 06:17:26 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:42.916 06:17:27 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:42.916 06:17:27 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:42.916 06:17:27 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:42.916 06:17:27 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:42.916 06:17:27 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:42.916 06:17:27 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:42.916 06:17:27 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:42.916 06:17:27 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:42.916 06:17:27 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:42.916 06:17:27 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:42.916 06:17:27 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:42.916 06:17:27 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:42.916 06:17:27 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:42.916 06:17:27 alias_rpc -- 
scripts/common.sh@345 -- # : 1 00:07:42.916 06:17:27 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:42.916 06:17:27 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:42.916 06:17:27 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:42.916 06:17:27 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:07:42.916 06:17:27 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:42.916 06:17:27 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:07:42.916 06:17:27 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:42.916 06:17:27 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:42.916 06:17:27 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:07:42.916 06:17:27 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:42.916 06:17:27 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:07:42.916 06:17:27 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:42.916 06:17:27 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:42.916 06:17:27 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:42.916 06:17:27 alias_rpc -- scripts/common.sh@368 -- # return 0 00:07:42.916 06:17:27 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:42.916 06:17:27 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:42.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.916 --rc genhtml_branch_coverage=1 00:07:42.916 --rc genhtml_function_coverage=1 00:07:42.916 --rc genhtml_legend=1 00:07:42.916 --rc geninfo_all_blocks=1 00:07:42.916 --rc geninfo_unexecuted_blocks=1 00:07:42.916 00:07:42.916 ' 00:07:42.916 06:17:27 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:42.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.916 --rc genhtml_branch_coverage=1 00:07:42.916 --rc genhtml_function_coverage=1 00:07:42.916 --rc 
genhtml_legend=1 00:07:42.916 --rc geninfo_all_blocks=1 00:07:42.916 --rc geninfo_unexecuted_blocks=1 00:07:42.916 00:07:42.916 ' 00:07:42.916 06:17:27 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:42.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.916 --rc genhtml_branch_coverage=1 00:07:42.916 --rc genhtml_function_coverage=1 00:07:42.916 --rc genhtml_legend=1 00:07:42.916 --rc geninfo_all_blocks=1 00:07:42.916 --rc geninfo_unexecuted_blocks=1 00:07:42.916 00:07:42.916 ' 00:07:42.916 06:17:27 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:42.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.916 --rc genhtml_branch_coverage=1 00:07:42.916 --rc genhtml_function_coverage=1 00:07:42.916 --rc genhtml_legend=1 00:07:42.916 --rc geninfo_all_blocks=1 00:07:42.916 --rc geninfo_unexecuted_blocks=1 00:07:42.916 00:07:42.916 ' 00:07:42.916 06:17:27 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:42.916 06:17:27 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:43.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:43.175 06:17:27 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58030 00:07:43.175 06:17:27 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58030 00:07:43.175 06:17:27 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 58030 ']' 00:07:43.175 06:17:27 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.175 06:17:27 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:43.175 06:17:27 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:43.175 06:17:27 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:43.175 06:17:27 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.175 [2024-11-26 06:17:27.154519] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:07:43.175 [2024-11-26 06:17:27.154742] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58030 ] 00:07:43.433 [2024-11-26 06:17:27.331036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.433 [2024-11-26 06:17:27.449123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.368 06:17:28 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:44.368 06:17:28 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:44.368 06:17:28 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:07:44.626 06:17:28 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58030 00:07:44.626 06:17:28 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 58030 ']' 00:07:44.626 06:17:28 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 58030 00:07:44.626 06:17:28 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:07:44.626 06:17:28 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:44.626 06:17:28 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58030 00:07:44.626 06:17:28 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:44.626 06:17:28 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:44.626 06:17:28 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58030' 00:07:44.626 killing process with pid 58030 00:07:44.626 06:17:28 alias_rpc -- 
common/autotest_common.sh@973 -- # kill 58030 00:07:44.626 06:17:28 alias_rpc -- common/autotest_common.sh@978 -- # wait 58030 00:07:47.187 00:07:47.188 real 0m4.289s 00:07:47.188 user 0m4.335s 00:07:47.188 sys 0m0.585s 00:07:47.188 06:17:31 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:47.188 06:17:31 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:47.188 ************************************ 00:07:47.188 END TEST alias_rpc 00:07:47.188 ************************************ 00:07:47.188 06:17:31 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:07:47.188 06:17:31 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:07:47.188 06:17:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:47.188 06:17:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:47.188 06:17:31 -- common/autotest_common.sh@10 -- # set +x 00:07:47.188 ************************************ 00:07:47.188 START TEST spdkcli_tcp 00:07:47.188 ************************************ 00:07:47.188 06:17:31 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:07:47.188 * Looking for test storage... 
00:07:47.188 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:07:47.188 06:17:31 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:47.188 06:17:31 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:07:47.188 06:17:31 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:47.526 06:17:31 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:47.526 06:17:31 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:47.526 06:17:31 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:47.526 06:17:31 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:47.526 06:17:31 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:47.526 06:17:31 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:47.526 06:17:31 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:47.526 06:17:31 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:47.526 06:17:31 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:47.526 06:17:31 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:47.526 06:17:31 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:47.526 06:17:31 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:47.526 06:17:31 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:47.526 06:17:31 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:07:47.526 06:17:31 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:47.526 06:17:31 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:47.526 06:17:31 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:47.526 06:17:31 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:07:47.526 06:17:31 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:47.526 06:17:31 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:07:47.526 06:17:31 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:47.526 06:17:31 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:47.526 06:17:31 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:07:47.526 06:17:31 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:47.526 06:17:31 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:07:47.526 06:17:31 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:47.526 06:17:31 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:47.526 06:17:31 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:47.526 06:17:31 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:07:47.526 06:17:31 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:47.526 06:17:31 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:47.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.526 --rc genhtml_branch_coverage=1 00:07:47.526 --rc genhtml_function_coverage=1 00:07:47.526 --rc genhtml_legend=1 00:07:47.526 --rc geninfo_all_blocks=1 00:07:47.526 --rc geninfo_unexecuted_blocks=1 00:07:47.526 00:07:47.526 ' 00:07:47.526 06:17:31 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:47.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.526 --rc genhtml_branch_coverage=1 00:07:47.526 --rc genhtml_function_coverage=1 00:07:47.526 --rc genhtml_legend=1 00:07:47.526 --rc geninfo_all_blocks=1 00:07:47.526 --rc geninfo_unexecuted_blocks=1 00:07:47.526 00:07:47.526 ' 00:07:47.526 06:17:31 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:47.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.526 --rc genhtml_branch_coverage=1 00:07:47.526 --rc genhtml_function_coverage=1 00:07:47.526 --rc genhtml_legend=1 00:07:47.526 --rc geninfo_all_blocks=1 00:07:47.526 --rc geninfo_unexecuted_blocks=1 00:07:47.526 00:07:47.526 ' 00:07:47.526 06:17:31 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:47.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.526 --rc genhtml_branch_coverage=1 00:07:47.526 --rc genhtml_function_coverage=1 00:07:47.526 --rc genhtml_legend=1 00:07:47.526 --rc geninfo_all_blocks=1 00:07:47.526 --rc geninfo_unexecuted_blocks=1 00:07:47.526 00:07:47.526 ' 00:07:47.526 06:17:31 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:07:47.526 06:17:31 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:07:47.526 06:17:31 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:07:47.526 06:17:31 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:07:47.526 06:17:31 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:07:47.526 06:17:31 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:07:47.526 06:17:31 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:07:47.526 06:17:31 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:47.526 06:17:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:47.526 06:17:31 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58138 00:07:47.526 06:17:31 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:07:47.526 06:17:31 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58138 00:07:47.526 06:17:31 spdkcli_tcp -- 
common/autotest_common.sh@835 -- # '[' -z 58138 ']' 00:07:47.526 06:17:31 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.526 06:17:31 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:47.526 06:17:31 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:47.526 06:17:31 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:47.526 06:17:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:47.526 [2024-11-26 06:17:31.510537] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:07:47.526 [2024-11-26 06:17:31.510678] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58138 ] 00:07:47.786 [2024-11-26 06:17:31.691967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:47.786 [2024-11-26 06:17:31.839756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.786 [2024-11-26 06:17:31.839803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:49.163 06:17:32 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:49.163 06:17:32 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:07:49.163 06:17:32 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58160 00:07:49.163 06:17:32 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:07:49.163 06:17:32 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:07:49.163 [ 00:07:49.163 "bdev_malloc_delete", 
00:07:49.163 "bdev_malloc_create", 00:07:49.163 "bdev_null_resize", 00:07:49.163 "bdev_null_delete", 00:07:49.163 "bdev_null_create", 00:07:49.163 "bdev_nvme_cuse_unregister", 00:07:49.163 "bdev_nvme_cuse_register", 00:07:49.163 "bdev_opal_new_user", 00:07:49.163 "bdev_opal_set_lock_state", 00:07:49.163 "bdev_opal_delete", 00:07:49.163 "bdev_opal_get_info", 00:07:49.163 "bdev_opal_create", 00:07:49.163 "bdev_nvme_opal_revert", 00:07:49.163 "bdev_nvme_opal_init", 00:07:49.163 "bdev_nvme_send_cmd", 00:07:49.163 "bdev_nvme_set_keys", 00:07:49.163 "bdev_nvme_get_path_iostat", 00:07:49.163 "bdev_nvme_get_mdns_discovery_info", 00:07:49.163 "bdev_nvme_stop_mdns_discovery", 00:07:49.163 "bdev_nvme_start_mdns_discovery", 00:07:49.163 "bdev_nvme_set_multipath_policy", 00:07:49.163 "bdev_nvme_set_preferred_path", 00:07:49.163 "bdev_nvme_get_io_paths", 00:07:49.163 "bdev_nvme_remove_error_injection", 00:07:49.163 "bdev_nvme_add_error_injection", 00:07:49.163 "bdev_nvme_get_discovery_info", 00:07:49.163 "bdev_nvme_stop_discovery", 00:07:49.163 "bdev_nvme_start_discovery", 00:07:49.163 "bdev_nvme_get_controller_health_info", 00:07:49.163 "bdev_nvme_disable_controller", 00:07:49.163 "bdev_nvme_enable_controller", 00:07:49.163 "bdev_nvme_reset_controller", 00:07:49.163 "bdev_nvme_get_transport_statistics", 00:07:49.163 "bdev_nvme_apply_firmware", 00:07:49.163 "bdev_nvme_detach_controller", 00:07:49.163 "bdev_nvme_get_controllers", 00:07:49.163 "bdev_nvme_attach_controller", 00:07:49.163 "bdev_nvme_set_hotplug", 00:07:49.163 "bdev_nvme_set_options", 00:07:49.163 "bdev_passthru_delete", 00:07:49.163 "bdev_passthru_create", 00:07:49.163 "bdev_lvol_set_parent_bdev", 00:07:49.163 "bdev_lvol_set_parent", 00:07:49.163 "bdev_lvol_check_shallow_copy", 00:07:49.163 "bdev_lvol_start_shallow_copy", 00:07:49.163 "bdev_lvol_grow_lvstore", 00:07:49.163 "bdev_lvol_get_lvols", 00:07:49.164 "bdev_lvol_get_lvstores", 00:07:49.164 "bdev_lvol_delete", 00:07:49.164 "bdev_lvol_set_read_only", 
00:07:49.164 "bdev_lvol_resize", 00:07:49.164 "bdev_lvol_decouple_parent", 00:07:49.164 "bdev_lvol_inflate", 00:07:49.164 "bdev_lvol_rename", 00:07:49.164 "bdev_lvol_clone_bdev", 00:07:49.164 "bdev_lvol_clone", 00:07:49.164 "bdev_lvol_snapshot", 00:07:49.164 "bdev_lvol_create", 00:07:49.164 "bdev_lvol_delete_lvstore", 00:07:49.164 "bdev_lvol_rename_lvstore", 00:07:49.164 "bdev_lvol_create_lvstore", 00:07:49.164 "bdev_raid_set_options", 00:07:49.164 "bdev_raid_remove_base_bdev", 00:07:49.164 "bdev_raid_add_base_bdev", 00:07:49.164 "bdev_raid_delete", 00:07:49.164 "bdev_raid_create", 00:07:49.164 "bdev_raid_get_bdevs", 00:07:49.164 "bdev_error_inject_error", 00:07:49.164 "bdev_error_delete", 00:07:49.164 "bdev_error_create", 00:07:49.164 "bdev_split_delete", 00:07:49.164 "bdev_split_create", 00:07:49.164 "bdev_delay_delete", 00:07:49.164 "bdev_delay_create", 00:07:49.164 "bdev_delay_update_latency", 00:07:49.164 "bdev_zone_block_delete", 00:07:49.164 "bdev_zone_block_create", 00:07:49.164 "blobfs_create", 00:07:49.164 "blobfs_detect", 00:07:49.164 "blobfs_set_cache_size", 00:07:49.164 "bdev_aio_delete", 00:07:49.164 "bdev_aio_rescan", 00:07:49.164 "bdev_aio_create", 00:07:49.164 "bdev_ftl_set_property", 00:07:49.164 "bdev_ftl_get_properties", 00:07:49.164 "bdev_ftl_get_stats", 00:07:49.164 "bdev_ftl_unmap", 00:07:49.164 "bdev_ftl_unload", 00:07:49.164 "bdev_ftl_delete", 00:07:49.164 "bdev_ftl_load", 00:07:49.164 "bdev_ftl_create", 00:07:49.164 "bdev_virtio_attach_controller", 00:07:49.164 "bdev_virtio_scsi_get_devices", 00:07:49.164 "bdev_virtio_detach_controller", 00:07:49.164 "bdev_virtio_blk_set_hotplug", 00:07:49.164 "bdev_iscsi_delete", 00:07:49.164 "bdev_iscsi_create", 00:07:49.164 "bdev_iscsi_set_options", 00:07:49.164 "accel_error_inject_error", 00:07:49.164 "ioat_scan_accel_module", 00:07:49.164 "dsa_scan_accel_module", 00:07:49.164 "iaa_scan_accel_module", 00:07:49.164 "keyring_file_remove_key", 00:07:49.164 "keyring_file_add_key", 00:07:49.164 
"keyring_linux_set_options", 00:07:49.164 "fsdev_aio_delete", 00:07:49.164 "fsdev_aio_create", 00:07:49.164 "iscsi_get_histogram", 00:07:49.164 "iscsi_enable_histogram", 00:07:49.164 "iscsi_set_options", 00:07:49.164 "iscsi_get_auth_groups", 00:07:49.164 "iscsi_auth_group_remove_secret", 00:07:49.164 "iscsi_auth_group_add_secret", 00:07:49.164 "iscsi_delete_auth_group", 00:07:49.164 "iscsi_create_auth_group", 00:07:49.164 "iscsi_set_discovery_auth", 00:07:49.164 "iscsi_get_options", 00:07:49.164 "iscsi_target_node_request_logout", 00:07:49.164 "iscsi_target_node_set_redirect", 00:07:49.164 "iscsi_target_node_set_auth", 00:07:49.164 "iscsi_target_node_add_lun", 00:07:49.164 "iscsi_get_stats", 00:07:49.164 "iscsi_get_connections", 00:07:49.164 "iscsi_portal_group_set_auth", 00:07:49.164 "iscsi_start_portal_group", 00:07:49.164 "iscsi_delete_portal_group", 00:07:49.164 "iscsi_create_portal_group", 00:07:49.164 "iscsi_get_portal_groups", 00:07:49.164 "iscsi_delete_target_node", 00:07:49.164 "iscsi_target_node_remove_pg_ig_maps", 00:07:49.164 "iscsi_target_node_add_pg_ig_maps", 00:07:49.164 "iscsi_create_target_node", 00:07:49.164 "iscsi_get_target_nodes", 00:07:49.164 "iscsi_delete_initiator_group", 00:07:49.164 "iscsi_initiator_group_remove_initiators", 00:07:49.164 "iscsi_initiator_group_add_initiators", 00:07:49.164 "iscsi_create_initiator_group", 00:07:49.164 "iscsi_get_initiator_groups", 00:07:49.164 "nvmf_set_crdt", 00:07:49.164 "nvmf_set_config", 00:07:49.164 "nvmf_set_max_subsystems", 00:07:49.164 "nvmf_stop_mdns_prr", 00:07:49.164 "nvmf_publish_mdns_prr", 00:07:49.164 "nvmf_subsystem_get_listeners", 00:07:49.164 "nvmf_subsystem_get_qpairs", 00:07:49.164 "nvmf_subsystem_get_controllers", 00:07:49.164 "nvmf_get_stats", 00:07:49.164 "nvmf_get_transports", 00:07:49.164 "nvmf_create_transport", 00:07:49.164 "nvmf_get_targets", 00:07:49.164 "nvmf_delete_target", 00:07:49.164 "nvmf_create_target", 00:07:49.164 "nvmf_subsystem_allow_any_host", 00:07:49.164 
"nvmf_subsystem_set_keys", 00:07:49.164 "nvmf_subsystem_remove_host", 00:07:49.164 "nvmf_subsystem_add_host", 00:07:49.164 "nvmf_ns_remove_host", 00:07:49.164 "nvmf_ns_add_host", 00:07:49.164 "nvmf_subsystem_remove_ns", 00:07:49.164 "nvmf_subsystem_set_ns_ana_group", 00:07:49.164 "nvmf_subsystem_add_ns", 00:07:49.164 "nvmf_subsystem_listener_set_ana_state", 00:07:49.164 "nvmf_discovery_get_referrals", 00:07:49.164 "nvmf_discovery_remove_referral", 00:07:49.164 "nvmf_discovery_add_referral", 00:07:49.164 "nvmf_subsystem_remove_listener", 00:07:49.164 "nvmf_subsystem_add_listener", 00:07:49.164 "nvmf_delete_subsystem", 00:07:49.164 "nvmf_create_subsystem", 00:07:49.164 "nvmf_get_subsystems", 00:07:49.164 "env_dpdk_get_mem_stats", 00:07:49.164 "nbd_get_disks", 00:07:49.164 "nbd_stop_disk", 00:07:49.164 "nbd_start_disk", 00:07:49.164 "ublk_recover_disk", 00:07:49.164 "ublk_get_disks", 00:07:49.164 "ublk_stop_disk", 00:07:49.164 "ublk_start_disk", 00:07:49.164 "ublk_destroy_target", 00:07:49.164 "ublk_create_target", 00:07:49.164 "virtio_blk_create_transport", 00:07:49.164 "virtio_blk_get_transports", 00:07:49.164 "vhost_controller_set_coalescing", 00:07:49.164 "vhost_get_controllers", 00:07:49.164 "vhost_delete_controller", 00:07:49.164 "vhost_create_blk_controller", 00:07:49.164 "vhost_scsi_controller_remove_target", 00:07:49.164 "vhost_scsi_controller_add_target", 00:07:49.164 "vhost_start_scsi_controller", 00:07:49.164 "vhost_create_scsi_controller", 00:07:49.164 "thread_set_cpumask", 00:07:49.164 "scheduler_set_options", 00:07:49.164 "framework_get_governor", 00:07:49.164 "framework_get_scheduler", 00:07:49.164 "framework_set_scheduler", 00:07:49.164 "framework_get_reactors", 00:07:49.164 "thread_get_io_channels", 00:07:49.164 "thread_get_pollers", 00:07:49.164 "thread_get_stats", 00:07:49.164 "framework_monitor_context_switch", 00:07:49.164 "spdk_kill_instance", 00:07:49.164 "log_enable_timestamps", 00:07:49.164 "log_get_flags", 00:07:49.164 "log_clear_flag", 
00:07:49.164 "log_set_flag", 00:07:49.164 "log_get_level", 00:07:49.164 "log_set_level", 00:07:49.164 "log_get_print_level", 00:07:49.164 "log_set_print_level", 00:07:49.164 "framework_enable_cpumask_locks", 00:07:49.164 "framework_disable_cpumask_locks", 00:07:49.164 "framework_wait_init", 00:07:49.164 "framework_start_init", 00:07:49.164 "scsi_get_devices", 00:07:49.164 "bdev_get_histogram", 00:07:49.164 "bdev_enable_histogram", 00:07:49.164 "bdev_set_qos_limit", 00:07:49.164 "bdev_set_qd_sampling_period", 00:07:49.164 "bdev_get_bdevs", 00:07:49.164 "bdev_reset_iostat", 00:07:49.164 "bdev_get_iostat", 00:07:49.164 "bdev_examine", 00:07:49.164 "bdev_wait_for_examine", 00:07:49.164 "bdev_set_options", 00:07:49.164 "accel_get_stats", 00:07:49.164 "accel_set_options", 00:07:49.164 "accel_set_driver", 00:07:49.164 "accel_crypto_key_destroy", 00:07:49.164 "accel_crypto_keys_get", 00:07:49.164 "accel_crypto_key_create", 00:07:49.164 "accel_assign_opc", 00:07:49.164 "accel_get_module_info", 00:07:49.164 "accel_get_opc_assignments", 00:07:49.164 "vmd_rescan", 00:07:49.164 "vmd_remove_device", 00:07:49.164 "vmd_enable", 00:07:49.164 "sock_get_default_impl", 00:07:49.164 "sock_set_default_impl", 00:07:49.164 "sock_impl_set_options", 00:07:49.164 "sock_impl_get_options", 00:07:49.164 "iobuf_get_stats", 00:07:49.164 "iobuf_set_options", 00:07:49.164 "keyring_get_keys", 00:07:49.164 "framework_get_pci_devices", 00:07:49.164 "framework_get_config", 00:07:49.164 "framework_get_subsystems", 00:07:49.164 "fsdev_set_opts", 00:07:49.164 "fsdev_get_opts", 00:07:49.164 "trace_get_info", 00:07:49.164 "trace_get_tpoint_group_mask", 00:07:49.164 "trace_disable_tpoint_group", 00:07:49.164 "trace_enable_tpoint_group", 00:07:49.164 "trace_clear_tpoint_mask", 00:07:49.164 "trace_set_tpoint_mask", 00:07:49.164 "notify_get_notifications", 00:07:49.164 "notify_get_types", 00:07:49.164 "spdk_get_version", 00:07:49.164 "rpc_get_methods" 00:07:49.164 ] 00:07:49.164 06:17:33 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:07:49.164 06:17:33 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:49.164 06:17:33 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:49.164 06:17:33 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:07:49.164 06:17:33 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58138 00:07:49.164 06:17:33 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 58138 ']' 00:07:49.164 06:17:33 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 58138 00:07:49.164 06:17:33 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:07:49.164 06:17:33 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:49.164 06:17:33 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58138 00:07:49.164 06:17:33 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:49.164 06:17:33 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:49.164 06:17:33 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58138' 00:07:49.164 killing process with pid 58138 00:07:49.164 06:17:33 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 58138 00:07:49.164 06:17:33 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 58138 00:07:51.696 ************************************ 00:07:51.696 END TEST spdkcli_tcp 00:07:51.696 ************************************ 00:07:51.696 00:07:51.696 real 0m4.557s 00:07:51.696 user 0m7.974s 00:07:51.696 sys 0m0.815s 00:07:51.696 06:17:35 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:51.696 06:17:35 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:51.696 06:17:35 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:51.696 06:17:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:51.696 06:17:35 -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:07:51.696 06:17:35 -- common/autotest_common.sh@10 -- # set +x 00:07:51.696 ************************************ 00:07:51.696 START TEST dpdk_mem_utility 00:07:51.696 ************************************ 00:07:51.696 06:17:35 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:51.955 * Looking for test storage... 00:07:51.955 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:07:51.955 06:17:35 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:51.955 06:17:35 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:07:51.955 06:17:35 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:51.955 06:17:35 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:51.955 06:17:35 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:51.955 06:17:35 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:51.955 06:17:35 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:51.955 06:17:35 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:07:51.955 06:17:35 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:07:51.955 06:17:35 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:07:51.955 06:17:35 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:07:51.955 06:17:35 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:07:51.955 06:17:35 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:07:51.955 06:17:35 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:07:51.955 06:17:35 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:51.955 06:17:35 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:07:51.955 06:17:35 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:07:51.955 
06:17:35 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:51.955 06:17:35 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:51.955 06:17:35 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:07:51.955 06:17:36 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:07:51.955 06:17:36 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:51.955 06:17:36 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:07:51.955 06:17:36 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:07:51.955 06:17:36 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:07:51.955 06:17:36 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:07:51.955 06:17:36 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:51.955 06:17:36 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:07:51.955 06:17:36 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:07:51.955 06:17:36 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:51.955 06:17:36 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:51.955 06:17:36 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:07:51.956 06:17:36 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:51.956 06:17:36 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:51.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.956 --rc genhtml_branch_coverage=1 00:07:51.956 --rc genhtml_function_coverage=1 00:07:51.956 --rc genhtml_legend=1 00:07:51.956 --rc geninfo_all_blocks=1 00:07:51.956 --rc geninfo_unexecuted_blocks=1 00:07:51.956 00:07:51.956 ' 00:07:51.956 06:17:36 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:51.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.956 --rc 
genhtml_branch_coverage=1 00:07:51.956 --rc genhtml_function_coverage=1 00:07:51.956 --rc genhtml_legend=1 00:07:51.956 --rc geninfo_all_blocks=1 00:07:51.956 --rc geninfo_unexecuted_blocks=1 00:07:51.956 00:07:51.956 ' 00:07:51.956 06:17:36 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:51.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.956 --rc genhtml_branch_coverage=1 00:07:51.956 --rc genhtml_function_coverage=1 00:07:51.956 --rc genhtml_legend=1 00:07:51.956 --rc geninfo_all_blocks=1 00:07:51.956 --rc geninfo_unexecuted_blocks=1 00:07:51.956 00:07:51.956 ' 00:07:51.956 06:17:36 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:51.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.956 --rc genhtml_branch_coverage=1 00:07:51.956 --rc genhtml_function_coverage=1 00:07:51.956 --rc genhtml_legend=1 00:07:51.956 --rc geninfo_all_blocks=1 00:07:51.956 --rc geninfo_unexecuted_blocks=1 00:07:51.956 00:07:51.956 ' 00:07:51.956 06:17:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:51.956 06:17:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:51.956 06:17:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58265 00:07:51.956 06:17:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58265 00:07:51.956 06:17:36 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58265 ']' 00:07:51.956 06:17:36 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.956 06:17:36 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:51.956 06:17:36 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:07:51.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.956 06:17:36 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:51.956 06:17:36 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:52.214 [2024-11-26 06:17:36.110646] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:07:52.214 [2024-11-26 06:17:36.110785] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58265 ] 00:07:52.214 [2024-11-26 06:17:36.289564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.472 [2024-11-26 06:17:36.413075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.413 06:17:37 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:53.413 06:17:37 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:07:53.413 06:17:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:53.413 06:17:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:53.413 06:17:37 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.413 06:17:37 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:53.413 { 00:07:53.413 "filename": "/tmp/spdk_mem_dump.txt" 00:07:53.413 } 00:07:53.413 06:17:37 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.413 06:17:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:53.413 DPDK memory size 816.000000 MiB in 1 heap(s) 00:07:53.413 1 heaps totaling size 816.000000 MiB 00:07:53.413 size: 
816.000000 MiB heap id: 0 00:07:53.413 end heaps---------- 00:07:53.413 9 mempools totaling size 595.772034 MiB 00:07:53.413 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:53.413 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:53.413 size: 92.545471 MiB name: bdev_io_58265 00:07:53.413 size: 50.003479 MiB name: msgpool_58265 00:07:53.413 size: 36.509338 MiB name: fsdev_io_58265 00:07:53.413 size: 21.763794 MiB name: PDU_Pool 00:07:53.413 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:53.413 size: 4.133484 MiB name: evtpool_58265 00:07:53.413 size: 0.026123 MiB name: Session_Pool 00:07:53.413 end mempools------- 00:07:53.413 6 memzones totaling size 4.142822 MiB 00:07:53.413 size: 1.000366 MiB name: RG_ring_0_58265 00:07:53.413 size: 1.000366 MiB name: RG_ring_1_58265 00:07:53.413 size: 1.000366 MiB name: RG_ring_4_58265 00:07:53.413 size: 1.000366 MiB name: RG_ring_5_58265 00:07:53.413 size: 0.125366 MiB name: RG_ring_2_58265 00:07:53.413 size: 0.015991 MiB name: RG_ring_3_58265 00:07:53.413 end memzones------- 00:07:53.413 06:17:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:07:53.413 heap id: 0 total size: 816.000000 MiB number of busy elements: 318 number of free elements: 18 00:07:53.413 list of free elements. 
size: 16.790649 MiB 00:07:53.413 element at address: 0x200006400000 with size: 1.995972 MiB 00:07:53.413 element at address: 0x20000a600000 with size: 1.995972 MiB 00:07:53.413 element at address: 0x200003e00000 with size: 1.991028 MiB 00:07:53.413 element at address: 0x200018d00040 with size: 0.999939 MiB 00:07:53.413 element at address: 0x200019100040 with size: 0.999939 MiB 00:07:53.413 element at address: 0x200019200000 with size: 0.999084 MiB 00:07:53.413 element at address: 0x200031e00000 with size: 0.994324 MiB 00:07:53.413 element at address: 0x200000400000 with size: 0.992004 MiB 00:07:53.413 element at address: 0x200018a00000 with size: 0.959656 MiB 00:07:53.413 element at address: 0x200019500040 with size: 0.936401 MiB 00:07:53.413 element at address: 0x200000200000 with size: 0.716980 MiB 00:07:53.413 element at address: 0x20001ac00000 with size: 0.560974 MiB 00:07:53.413 element at address: 0x200000c00000 with size: 0.490173 MiB 00:07:53.413 element at address: 0x200018e00000 with size: 0.487976 MiB 00:07:53.413 element at address: 0x200019600000 with size: 0.485413 MiB 00:07:53.413 element at address: 0x200012c00000 with size: 0.443481 MiB 00:07:53.413 element at address: 0x200028000000 with size: 0.390442 MiB 00:07:53.413 element at address: 0x200000800000 with size: 0.350891 MiB 00:07:53.413 list of standard malloc elements. 
size: 199.288452 MiB 00:07:53.413 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:07:53.413 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:07:53.413 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:07:53.413 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:07:53.413 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:07:53.413 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:07:53.413 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:07:53.413 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:07:53.413 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:07:53.413 element at address: 0x2000195efdc0 with size: 0.000366 MiB 00:07:53.413 element at address: 0x200012bff040 with size: 0.000305 MiB 00:07:53.413 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:07:53.413 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:07:53.413 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:07:53.413 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:07:53.413 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:07:53.414 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:07:53.414 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:07:53.414 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:07:53.414 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:07:53.414 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:07:53.414 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:07:53.414 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:07:53.414 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:07:53.414 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:07:53.414 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:07:53.414 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:07:53.414 element at 
address: 0x2000004fed40 with size: 0.000244 MiB 00:07:53.414 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:07:53.414 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:07:53.414 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:07:53.414 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:07:53.414 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:07:53.414 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:07:53.414 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:07:53.414 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:07:53.414 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:07:53.414 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:07:53.414 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:07:53.414 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:07:53.414 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:07:53.414 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:07:53.414 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:07:53.414 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:07:53.414 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:07:53.414 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:07:53.414 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:07:53.414 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:07:53.414 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:07:53.414 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:07:53.414 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:07:53.414 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:07:53.414 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:07:53.414 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:07:53.414 element at address: 0x20000087ecc0 with size: 0.000244 MiB 
00:07:53.414 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:07:53.414 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:07:53.414 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:07:53.414 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:07:53.414 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:07:53.414 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:07:53.414 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:07:53.414 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:07:53.414 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:07:53.414 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:07:53.414 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:07:53.414 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:07:53.414 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:07:53.414 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:07:53.414 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:07:53.414 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:07:53.414 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:07:53.414 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:07:53.414 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:07:53.414 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:07:53.414 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:07:53.414 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:07:53.414 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:07:53.414 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:07:53.414 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:07:53.414 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:07:53.414 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:07:53.414 element at address: 0x200000c7e8c0 with 
size: 0.000244 MiB 00:07:53.414 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:07:53.414 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:07:53.414 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:07:53.414 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:07:53.414 element at address: 0x200000cff000 with size: 0.000244 MiB 00:07:53.414 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:07:53.414 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:07:53.414 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:07:53.414 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:07:53.414 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:07:53.414 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:07:53.414 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:07:53.414 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:07:53.414 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:07:53.414 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:07:53.414 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:07:53.414 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:07:53.414 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:07:53.414 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:07:53.414 element at address: 0x200012bff180 with size: 0.000244 MiB 00:07:53.414 element at address: 0x200012bff280 with size: 0.000244 MiB 00:07:53.414 element at address: 0x200012bff380 with size: 0.000244 MiB 00:07:53.414 element at address: 0x200012bff480 with size: 0.000244 MiB 00:07:53.414 element at address: 0x200012bff580 with size: 0.000244 MiB 00:07:53.414 element at address: 0x200012bff680 with size: 0.000244 MiB 00:07:53.414 element at address: 0x200012bff780 with size: 0.000244 MiB 00:07:53.414 element at address: 0x200012bff880 with size: 0.000244 MiB 00:07:53.414 element at address: 
0x200012bff980 with size: 0.000244 MiB 00:07:53.414 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:07:53.414 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:07:53.414 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:07:53.414 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:07:53.414 element at address: 0x200012c71880 with size: 0.000244 MiB 00:07:53.414 element at address: 0x200012c71980 with size: 0.000244 MiB 00:07:53.414 element at address: 0x200012c71a80 with size: 0.000244 MiB 00:07:53.414 element at address: 0x200012c71b80 with size: 0.000244 MiB 00:07:53.414 element at address: 0x200012c71c80 with size: 0.000244 MiB 00:07:53.414 element at address: 0x200012c71d80 with size: 0.000244 MiB 00:07:53.414 element at address: 0x200012c71e80 with size: 0.000244 MiB 00:07:53.414 element at address: 0x200012c71f80 with size: 0.000244 MiB 00:07:53.414 element at address: 0x200012c72080 with size: 0.000244 MiB 00:07:53.414 element at address: 0x200012c72180 with size: 0.000244 MiB 00:07:53.414 element at address: 0x200012cf24c0 with size: 0.000244 MiB 00:07:53.414 element at address: 0x200018afdd00 with size: 0.000244 MiB 00:07:53.414 element at address: 0x200018e7cec0 with size: 0.000244 MiB 00:07:53.414 element at address: 0x200018e7cfc0 with size: 0.000244 MiB 00:07:53.414 element at address: 0x200018e7d0c0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x200018e7d1c0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x200018e7d2c0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x200018e7d3c0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x200018e7d4c0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x200018e7d5c0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x200018e7d6c0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x200018e7d7c0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x200018e7d8c0 with size: 0.000244 MiB 00:07:53.415 
element at address: 0x200018e7d9c0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:07:53.415 element at address: 0x2000192ffc40 with size: 0.000244 MiB 00:07:53.415 element at address: 0x2000195efbc0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x2000195efcc0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x2000196bc680 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac8f9c0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac8fac0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac8fbc0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac8fcc0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac8fdc0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac8fec0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac8ffc0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac900c0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac901c0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac902c0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac903c0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac904c0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac905c0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac906c0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac907c0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac908c0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac909c0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac90ac0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac90bc0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac90cc0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac90dc0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac90ec0 with size: 0.000244 
MiB 00:07:53.415 element at address: 0x20001ac90fc0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac910c0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac911c0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac912c0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac913c0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac914c0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac915c0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac916c0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac917c0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac918c0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac919c0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac91ac0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac91bc0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac91cc0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac91dc0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac91ec0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac91fc0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac920c0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac921c0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac922c0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac923c0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac924c0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac925c0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac926c0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac927c0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac928c0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac929c0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac92ac0 
with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac92bc0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac92cc0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac92dc0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac92ec0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac92fc0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac930c0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac931c0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac932c0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac933c0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac934c0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac935c0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac936c0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac937c0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac938c0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac939c0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac93ac0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac93bc0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac93cc0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac93dc0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac93ec0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac93fc0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac940c0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac941c0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac942c0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac943c0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac944c0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac945c0 with size: 0.000244 MiB 00:07:53.415 element at 
address: 0x20001ac946c0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac947c0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac948c0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac949c0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac94ac0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac94bc0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac94cc0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac94dc0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac94ec0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac94fc0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac950c0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac951c0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac952c0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20001ac953c0 with size: 0.000244 MiB 00:07:53.415 element at address: 0x200028063f40 with size: 0.000244 MiB 00:07:53.415 element at address: 0x200028064040 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20002806ad00 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20002806af80 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20002806b080 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20002806b180 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20002806b280 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20002806b380 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20002806b480 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20002806b580 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20002806b680 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20002806b780 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20002806b880 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20002806b980 with size: 0.000244 MiB 
00:07:53.415 element at address: 0x20002806ba80 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20002806bb80 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20002806bc80 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20002806bd80 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20002806be80 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20002806bf80 with size: 0.000244 MiB 00:07:53.415 element at address: 0x20002806c080 with size: 0.000244 MiB 00:07:53.416 element at address: 0x20002806c180 with size: 0.000244 MiB 00:07:53.416 element at address: 0x20002806c280 with size: 0.000244 MiB 00:07:53.416 element at address: 0x20002806c380 with size: 0.000244 MiB 00:07:53.416 element at address: 0x20002806c480 with size: 0.000244 MiB 00:07:53.416 element at address: 0x20002806c580 with size: 0.000244 MiB 00:07:53.416 element at address: 0x20002806c680 with size: 0.000244 MiB 00:07:53.416 element at address: 0x20002806c780 with size: 0.000244 MiB 00:07:53.416 element at address: 0x20002806c880 with size: 0.000244 MiB 00:07:53.416 element at address: 0x20002806c980 with size: 0.000244 MiB 00:07:53.416 element at address: 0x20002806ca80 with size: 0.000244 MiB 00:07:53.416 element at address: 0x20002806cb80 with size: 0.000244 MiB 00:07:53.416 element at address: 0x20002806cc80 with size: 0.000244 MiB 00:07:53.416 element at address: 0x20002806cd80 with size: 0.000244 MiB 00:07:53.416 element at address: 0x20002806ce80 with size: 0.000244 MiB 00:07:53.416 element at address: 0x20002806cf80 with size: 0.000244 MiB 00:07:53.416 element at address: 0x20002806d080 with size: 0.000244 MiB 00:07:53.416 element at address: 0x20002806d180 with size: 0.000244 MiB 00:07:53.416 element at address: 0x20002806d280 with size: 0.000244 MiB 00:07:53.416 element at address: 0x20002806d380 with size: 0.000244 MiB 00:07:53.416 element at address: 0x20002806d480 with size: 0.000244 MiB 00:07:53.416 element at address: 0x20002806d580 with 
size: 0.000244 MiB 00:07:53.416 element at address: 0x20002806d680 with size: 0.000244 MiB 00:07:53.416 element at address: 0x20002806d780 with size: 0.000244 MiB 00:07:53.416 element at address: 0x20002806d880 with size: 0.000244 MiB 00:07:53.416 element at address: 0x20002806d980 with size: 0.000244 MiB 00:07:53.416 element at address: 0x20002806da80 with size: 0.000244 MiB 00:07:53.416 element at address: 0x20002806db80 with size: 0.000244 MiB 00:07:53.416 element at address: 0x20002806dc80 with size: 0.000244 MiB 00:07:53.416 element at address: 0x20002806dd80 with size: 0.000244 MiB 00:07:53.416 element at address: 0x20002806de80 with size: 0.000244 MiB 00:07:53.416 element at address: 0x20002806df80 with size: 0.000244 MiB 00:07:53.416 element at address: 0x20002806e080 with size: 0.000244 MiB 00:07:53.416 element at address: 0x20002806e180 with size: 0.000244 MiB 00:07:53.416 element at address: 0x20002806e280 with size: 0.000244 MiB 00:07:53.416 element at address: 0x20002806e380 with size: 0.000244 MiB 00:07:53.416 element at address: 0x20002806e480 with size: 0.000244 MiB 00:07:53.416 element at address: 0x20002806e580 with size: 0.000244 MiB 00:07:53.416 element at address: 0x20002806e680 with size: 0.000244 MiB 00:07:53.416 element at address: 0x20002806e780 with size: 0.000244 MiB 00:07:53.416 element at address: 0x20002806e880 with size: 0.000244 MiB 00:07:53.416 element at address: 0x20002806e980 with size: 0.000244 MiB 00:07:53.416 element at address: 0x20002806ea80 with size: 0.000244 MiB 00:07:53.416 element at address: 0x20002806eb80 with size: 0.000244 MiB 00:07:53.416 element at address: 0x20002806ec80 with size: 0.000244 MiB 00:07:53.416 element at address: 0x20002806ed80 with size: 0.000244 MiB 00:07:53.416 element at address: 0x20002806ee80 with size: 0.000244 MiB 00:07:53.416 element at address: 0x20002806ef80 with size: 0.000244 MiB 00:07:53.416 element at address: 0x20002806f080 with size: 0.000244 MiB 00:07:53.416 element at address: 
0x20002806f180 with size: 0.000244 MiB 00:07:53.416 element at address: 0x20002806f280 with size: 0.000244 MiB 00:07:53.416 element at address: 0x20002806f380 with size: 0.000244 MiB 00:07:53.416 element at address: 0x20002806f480 with size: 0.000244 MiB 00:07:53.417 element at address: 0x20002806f580 with size: 0.000244 MiB 00:07:53.417 element at address: 0x20002806f680 with size: 0.000244 MiB 00:07:53.417 element at address: 0x20002806f780 with size: 0.000244 MiB 00:07:53.417 element at address: 0x20002806f880 with size: 0.000244 MiB 00:07:53.417 element at address: 0x20002806f980 with size: 0.000244 MiB 00:07:53.417 element at address: 0x20002806fa80 with size: 0.000244 MiB 00:07:53.417 element at address: 0x20002806fb80 with size: 0.000244 MiB 00:07:53.417 element at address: 0x20002806fc80 with size: 0.000244 MiB 00:07:53.417 element at address: 0x20002806fd80 with size: 0.000244 MiB 00:07:53.417 element at address: 0x20002806fe80 with size: 0.000244 MiB 00:07:53.417 list of memzone associated elements. 
size: 599.920898 MiB 00:07:53.417 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:07:53.417 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:53.417 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:07:53.417 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:53.417 element at address: 0x200012df4740 with size: 92.045105 MiB 00:07:53.417 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58265_0 00:07:53.417 element at address: 0x200000dff340 with size: 48.003113 MiB 00:07:53.417 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58265_0 00:07:53.417 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:07:53.417 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58265_0 00:07:53.417 element at address: 0x2000197be900 with size: 20.255615 MiB 00:07:53.417 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:53.417 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:07:53.417 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:53.417 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:07:53.417 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58265_0 00:07:53.417 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:07:53.417 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58265 00:07:53.417 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:07:53.417 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58265 00:07:53.417 element at address: 0x200018efde00 with size: 1.008179 MiB 00:07:53.417 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:53.417 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:07:53.417 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:53.417 element at address: 0x200018afde00 with size: 1.008179 MiB 00:07:53.417 associated 
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:53.417 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:07:53.417 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:07:53.417 element at address: 0x200000cff100 with size: 1.000549 MiB 00:07:53.417 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58265 00:07:53.417 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:07:53.417 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58265 00:07:53.417 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:07:53.417 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58265 00:07:53.417 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:07:53.417 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58265 00:07:53.417 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:07:53.417 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58265 00:07:53.417 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:07:53.417 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58265 00:07:53.418 element at address: 0x200018e7dac0 with size: 0.500549 MiB 00:07:53.418 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:53.418 element at address: 0x200012c72280 with size: 0.500549 MiB 00:07:53.418 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:07:53.418 element at address: 0x20001967c440 with size: 0.250549 MiB 00:07:53.418 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:07:53.418 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:07:53.418 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58265 00:07:53.418 element at address: 0x20000085df80 with size: 0.125549 MiB 00:07:53.418 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58265 00:07:53.418 element at address: 0x200018af5ac0 with size: 0.031799 MiB 00:07:53.418 
associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:53.418 element at address: 0x200028064140 with size: 0.023804 MiB 00:07:53.418 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:53.418 element at address: 0x200000859d40 with size: 0.016174 MiB 00:07:53.418 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58265 00:07:53.418 element at address: 0x20002806a2c0 with size: 0.002502 MiB 00:07:53.418 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:53.418 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:07:53.418 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58265 00:07:53.418 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:07:53.418 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58265 00:07:53.418 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:07:53.418 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58265 00:07:53.418 element at address: 0x20002806ae00 with size: 0.000366 MiB 00:07:53.418 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:53.418 06:17:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:53.418 06:17:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58265 00:07:53.418 06:17:37 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58265 ']' 00:07:53.418 06:17:37 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58265 00:07:53.418 06:17:37 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:07:53.418 06:17:37 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:53.418 06:17:37 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58265 00:07:53.418 killing process with pid 58265 00:07:53.418 06:17:37 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:07:53.418 06:17:37 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:53.418 06:17:37 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58265' 00:07:53.418 06:17:37 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58265 00:07:53.418 06:17:37 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58265 00:07:55.955 00:07:55.955 real 0m4.174s 00:07:55.955 user 0m4.120s 00:07:55.955 sys 0m0.595s 00:07:55.955 06:17:39 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:55.955 ************************************ 00:07:55.955 END TEST dpdk_mem_utility 00:07:55.955 ************************************ 00:07:55.955 06:17:39 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:55.955 06:17:40 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:55.955 06:17:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:55.955 06:17:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:55.955 06:17:40 -- common/autotest_common.sh@10 -- # set +x 00:07:55.955 ************************************ 00:07:55.955 START TEST event 00:07:55.955 ************************************ 00:07:55.955 06:17:40 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:56.214 * Looking for test storage... 
00:07:56.214 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:56.214 06:17:40 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:56.214 06:17:40 event -- common/autotest_common.sh@1693 -- # lcov --version 00:07:56.214 06:17:40 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:56.214 06:17:40 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:56.214 06:17:40 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:56.215 06:17:40 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:56.215 06:17:40 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:56.215 06:17:40 event -- scripts/common.sh@336 -- # IFS=.-: 00:07:56.215 06:17:40 event -- scripts/common.sh@336 -- # read -ra ver1 00:07:56.215 06:17:40 event -- scripts/common.sh@337 -- # IFS=.-: 00:07:56.215 06:17:40 event -- scripts/common.sh@337 -- # read -ra ver2 00:07:56.215 06:17:40 event -- scripts/common.sh@338 -- # local 'op=<' 00:07:56.215 06:17:40 event -- scripts/common.sh@340 -- # ver1_l=2 00:07:56.215 06:17:40 event -- scripts/common.sh@341 -- # ver2_l=1 00:07:56.215 06:17:40 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:56.215 06:17:40 event -- scripts/common.sh@344 -- # case "$op" in 00:07:56.215 06:17:40 event -- scripts/common.sh@345 -- # : 1 00:07:56.215 06:17:40 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:56.215 06:17:40 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:56.215 06:17:40 event -- scripts/common.sh@365 -- # decimal 1 00:07:56.215 06:17:40 event -- scripts/common.sh@353 -- # local d=1 00:07:56.215 06:17:40 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:56.215 06:17:40 event -- scripts/common.sh@355 -- # echo 1 00:07:56.215 06:17:40 event -- scripts/common.sh@365 -- # ver1[v]=1 00:07:56.215 06:17:40 event -- scripts/common.sh@366 -- # decimal 2 00:07:56.215 06:17:40 event -- scripts/common.sh@353 -- # local d=2 00:07:56.215 06:17:40 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:56.215 06:17:40 event -- scripts/common.sh@355 -- # echo 2 00:07:56.215 06:17:40 event -- scripts/common.sh@366 -- # ver2[v]=2 00:07:56.215 06:17:40 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:56.215 06:17:40 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:56.215 06:17:40 event -- scripts/common.sh@368 -- # return 0 00:07:56.215 06:17:40 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:56.215 06:17:40 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:56.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.215 --rc genhtml_branch_coverage=1 00:07:56.215 --rc genhtml_function_coverage=1 00:07:56.215 --rc genhtml_legend=1 00:07:56.215 --rc geninfo_all_blocks=1 00:07:56.215 --rc geninfo_unexecuted_blocks=1 00:07:56.215 00:07:56.215 ' 00:07:56.215 06:17:40 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:56.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.215 --rc genhtml_branch_coverage=1 00:07:56.215 --rc genhtml_function_coverage=1 00:07:56.215 --rc genhtml_legend=1 00:07:56.215 --rc geninfo_all_blocks=1 00:07:56.215 --rc geninfo_unexecuted_blocks=1 00:07:56.215 00:07:56.215 ' 00:07:56.215 06:17:40 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:56.215 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:56.215 --rc genhtml_branch_coverage=1 00:07:56.215 --rc genhtml_function_coverage=1 00:07:56.215 --rc genhtml_legend=1 00:07:56.215 --rc geninfo_all_blocks=1 00:07:56.215 --rc geninfo_unexecuted_blocks=1 00:07:56.215 00:07:56.215 ' 00:07:56.215 06:17:40 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:56.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.215 --rc genhtml_branch_coverage=1 00:07:56.215 --rc genhtml_function_coverage=1 00:07:56.215 --rc genhtml_legend=1 00:07:56.215 --rc geninfo_all_blocks=1 00:07:56.215 --rc geninfo_unexecuted_blocks=1 00:07:56.215 00:07:56.215 ' 00:07:56.215 06:17:40 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:56.215 06:17:40 event -- bdev/nbd_common.sh@6 -- # set -e 00:07:56.215 06:17:40 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:56.215 06:17:40 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:07:56.215 06:17:40 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:56.215 06:17:40 event -- common/autotest_common.sh@10 -- # set +x 00:07:56.215 ************************************ 00:07:56.215 START TEST event_perf 00:07:56.215 ************************************ 00:07:56.215 06:17:40 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:56.215 Running I/O for 1 seconds...[2024-11-26 06:17:40.325583] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:07:56.215 [2024-11-26 06:17:40.325769] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58373 ] 00:07:56.475 [2024-11-26 06:17:40.510518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:56.734 [2024-11-26 06:17:40.640954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:56.734 [2024-11-26 06:17:40.641171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:56.734 [2024-11-26 06:17:40.641345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.734 [2024-11-26 06:17:40.641385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:58.114 Running I/O for 1 seconds... 00:07:58.114 lcore 0: 95984 00:07:58.114 lcore 1: 95980 00:07:58.114 lcore 2: 95983 00:07:58.114 lcore 3: 95981 00:07:58.114 done. 
00:07:58.114 00:07:58.114 real 0m1.619s 00:07:58.114 user 0m4.364s 00:07:58.114 sys 0m0.127s 00:07:58.114 06:17:41 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:58.114 06:17:41 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:58.114 ************************************ 00:07:58.114 END TEST event_perf 00:07:58.114 ************************************ 00:07:58.114 06:17:41 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:58.114 06:17:41 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:58.114 06:17:41 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:58.114 06:17:41 event -- common/autotest_common.sh@10 -- # set +x 00:07:58.114 ************************************ 00:07:58.114 START TEST event_reactor 00:07:58.114 ************************************ 00:07:58.114 06:17:41 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:58.114 [2024-11-26 06:17:42.016845] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:07:58.114 [2024-11-26 06:17:42.017001] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58413 ] 00:07:58.114 [2024-11-26 06:17:42.197540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.371 [2024-11-26 06:17:42.319373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.754 test_start 00:07:59.754 oneshot 00:07:59.754 tick 100 00:07:59.754 tick 100 00:07:59.754 tick 250 00:07:59.754 tick 100 00:07:59.754 tick 100 00:07:59.754 tick 100 00:07:59.754 tick 250 00:07:59.754 tick 500 00:07:59.754 tick 100 00:07:59.754 tick 100 00:07:59.754 tick 250 00:07:59.754 tick 100 00:07:59.754 tick 100 00:07:59.754 test_end 00:07:59.754 00:07:59.754 real 0m1.616s 00:07:59.754 user 0m1.390s 00:07:59.754 sys 0m0.113s 00:07:59.754 ************************************ 00:07:59.754 END TEST event_reactor 00:07:59.754 ************************************ 00:07:59.754 06:17:43 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:59.754 06:17:43 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:59.754 06:17:43 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:59.754 06:17:43 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:59.754 06:17:43 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:59.754 06:17:43 event -- common/autotest_common.sh@10 -- # set +x 00:07:59.754 ************************************ 00:07:59.754 START TEST event_reactor_perf 00:07:59.754 ************************************ 00:07:59.754 06:17:43 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:59.754 [2024-11-26 
06:17:43.687772] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:07:59.754 [2024-11-26 06:17:43.687985] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58455 ] 00:07:59.754 [2024-11-26 06:17:43.869383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.012 [2024-11-26 06:17:43.991573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.388 test_start 00:08:01.388 test_end 00:08:01.388 Performance: 347832 events per second 00:08:01.388 00:08:01.388 real 0m1.582s 00:08:01.388 user 0m1.368s 00:08:01.388 sys 0m0.105s 00:08:01.388 ************************************ 00:08:01.388 END TEST event_reactor_perf 00:08:01.388 ************************************ 00:08:01.388 06:17:45 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:01.388 06:17:45 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:08:01.388 06:17:45 event -- event/event.sh@49 -- # uname -s 00:08:01.388 06:17:45 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:08:01.388 06:17:45 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:08:01.388 06:17:45 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:01.388 06:17:45 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:01.388 06:17:45 event -- common/autotest_common.sh@10 -- # set +x 00:08:01.388 ************************************ 00:08:01.388 START TEST event_scheduler 00:08:01.388 ************************************ 00:08:01.388 06:17:45 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:08:01.388 * Looking for test storage... 
00:08:01.388 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:08:01.388 06:17:45 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:01.388 06:17:45 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:08:01.388 06:17:45 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:01.646 06:17:45 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:01.646 06:17:45 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:01.646 06:17:45 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:01.646 06:17:45 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:01.646 06:17:45 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:08:01.646 06:17:45 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:08:01.646 06:17:45 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:08:01.646 06:17:45 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:08:01.646 06:17:45 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:08:01.646 06:17:45 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:08:01.646 06:17:45 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:08:01.646 06:17:45 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:01.646 06:17:45 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:08:01.646 06:17:45 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:08:01.646 06:17:45 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:01.646 06:17:45 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:01.646 06:17:45 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:08:01.646 06:17:45 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:08:01.647 06:17:45 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:01.647 06:17:45 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:08:01.647 06:17:45 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:08:01.647 06:17:45 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:08:01.647 06:17:45 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:08:01.647 06:17:45 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:01.647 06:17:45 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:08:01.647 06:17:45 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:08:01.647 06:17:45 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:01.647 06:17:45 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:01.647 06:17:45 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:08:01.647 06:17:45 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:01.647 06:17:45 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:01.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.647 --rc genhtml_branch_coverage=1 00:08:01.647 --rc genhtml_function_coverage=1 00:08:01.647 --rc genhtml_legend=1 00:08:01.647 --rc geninfo_all_blocks=1 00:08:01.647 --rc geninfo_unexecuted_blocks=1 00:08:01.647 00:08:01.647 ' 00:08:01.647 06:17:45 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:01.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.647 --rc genhtml_branch_coverage=1 00:08:01.647 --rc genhtml_function_coverage=1 00:08:01.647 --rc 
genhtml_legend=1 00:08:01.647 --rc geninfo_all_blocks=1 00:08:01.647 --rc geninfo_unexecuted_blocks=1 00:08:01.647 00:08:01.647 ' 00:08:01.647 06:17:45 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:01.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.647 --rc genhtml_branch_coverage=1 00:08:01.647 --rc genhtml_function_coverage=1 00:08:01.647 --rc genhtml_legend=1 00:08:01.647 --rc geninfo_all_blocks=1 00:08:01.647 --rc geninfo_unexecuted_blocks=1 00:08:01.647 00:08:01.647 ' 00:08:01.647 06:17:45 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:01.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.647 --rc genhtml_branch_coverage=1 00:08:01.647 --rc genhtml_function_coverage=1 00:08:01.647 --rc genhtml_legend=1 00:08:01.647 --rc geninfo_all_blocks=1 00:08:01.647 --rc geninfo_unexecuted_blocks=1 00:08:01.647 00:08:01.647 ' 00:08:01.647 06:17:45 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:08:01.647 06:17:45 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58525 00:08:01.647 06:17:45 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:08:01.647 06:17:45 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:08:01.647 06:17:45 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58525 00:08:01.647 06:17:45 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58525 ']' 00:08:01.647 06:17:45 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.647 06:17:45 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:01.647 06:17:45 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:08:01.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:01.647 06:17:45 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:01.647 06:17:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:01.647 [2024-11-26 06:17:45.641985] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:08:01.647 [2024-11-26 06:17:45.642152] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58525 ] 00:08:01.906 [2024-11-26 06:17:45.825477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:01.906 [2024-11-26 06:17:45.956220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.906 [2024-11-26 06:17:45.956350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:01.906 [2024-11-26 06:17:45.956528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:01.906 [2024-11-26 06:17:45.956572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:02.473 06:17:46 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:02.473 06:17:46 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:08:02.473 06:17:46 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:08:02.473 06:17:46 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.473 06:17:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:02.473 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:02.473 POWER: Cannot set governor of lcore 0 to userspace 00:08:02.473 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:02.473 POWER: Cannot set governor of lcore 0 to performance 00:08:02.473 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:02.473 POWER: Cannot set governor of lcore 0 to userspace 00:08:02.473 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:02.473 POWER: Cannot set governor of lcore 0 to userspace 00:08:02.473 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:08:02.473 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:08:02.473 POWER: Unable to set Power Management Environment for lcore 0 00:08:02.473 [2024-11-26 06:17:46.521342] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:08:02.473 [2024-11-26 06:17:46.521364] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:08:02.473 [2024-11-26 06:17:46.521375] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:08:02.473 [2024-11-26 06:17:46.521394] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:08:02.473 [2024-11-26 06:17:46.521403] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:08:02.473 [2024-11-26 06:17:46.521413] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:08:02.473 06:17:46 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.473 06:17:46 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:08:02.473 06:17:46 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.473 06:17:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:03.040 [2024-11-26 06:17:46.869906] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:08:03.040 06:17:46 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.040 06:17:46 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:08:03.040 06:17:46 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:03.040 06:17:46 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:03.040 06:17:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:03.040 ************************************ 00:08:03.040 START TEST scheduler_create_thread 00:08:03.040 ************************************ 00:08:03.040 06:17:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:08:03.040 06:17:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:08:03.040 06:17:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.040 06:17:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:03.040 2 00:08:03.040 06:17:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.040 06:17:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:08:03.040 06:17:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.040 06:17:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:03.040 3 00:08:03.040 06:17:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.040 06:17:46 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:08:03.040 06:17:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.040 06:17:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:03.040 4 00:08:03.040 06:17:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.040 06:17:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:08:03.040 06:17:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.040 06:17:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:03.040 5 00:08:03.040 06:17:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.040 06:17:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:08:03.040 06:17:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.040 06:17:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:03.040 6 00:08:03.040 06:17:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.040 06:17:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:08:03.040 06:17:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.040 06:17:46 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:08:03.040 7 00:08:03.040 06:17:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.040 06:17:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:08:03.040 06:17:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.040 06:17:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:03.040 8 00:08:03.040 06:17:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.041 06:17:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:08:03.041 06:17:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.041 06:17:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:03.041 9 00:08:03.041 06:17:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.041 06:17:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:08:03.041 06:17:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.041 06:17:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:03.041 10 00:08:03.041 06:17:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.041 06:17:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:08:03.041 06:17:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.041 06:17:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:04.416 06:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.416 06:17:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:08:04.416 06:17:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:08:04.416 06:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.416 06:17:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:05.351 06:17:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.351 06:17:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:08:05.351 06:17:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.351 06:17:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:05.917 06:17:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.917 06:17:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:08:05.917 06:17:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:08:05.917 06:17:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.917 06:17:50 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:06.852 ************************************ 00:08:06.852 END TEST scheduler_create_thread 00:08:06.852 ************************************ 00:08:06.852 06:17:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.852 00:08:06.852 real 0m3.887s 00:08:06.852 user 0m0.029s 00:08:06.852 sys 0m0.008s 00:08:06.852 06:17:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:06.852 06:17:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:06.852 06:17:50 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:08:06.852 06:17:50 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58525 00:08:06.852 06:17:50 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58525 ']' 00:08:06.852 06:17:50 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58525 00:08:06.852 06:17:50 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:08:06.852 06:17:50 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:06.852 06:17:50 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58525 00:08:06.852 killing process with pid 58525 00:08:06.852 06:17:50 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:08:06.852 06:17:50 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:08:06.852 06:17:50 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58525' 00:08:06.852 06:17:50 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58525 00:08:06.852 06:17:50 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58525 00:08:07.112 [2024-11-26 06:17:51.152415] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:08:08.491 00:08:08.491 real 0m7.129s 00:08:08.491 user 0m14.747s 00:08:08.491 sys 0m0.556s 00:08:08.491 06:17:52 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:08.491 06:17:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:08.491 ************************************ 00:08:08.491 END TEST event_scheduler 00:08:08.491 ************************************ 00:08:08.491 06:17:52 event -- event/event.sh@51 -- # modprobe -n nbd 00:08:08.491 06:17:52 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:08:08.491 06:17:52 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:08.491 06:17:52 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:08.491 06:17:52 event -- common/autotest_common.sh@10 -- # set +x 00:08:08.491 ************************************ 00:08:08.491 START TEST app_repeat 00:08:08.491 ************************************ 00:08:08.491 06:17:52 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:08:08.491 06:17:52 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:08.491 06:17:52 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:08.491 06:17:52 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:08:08.491 06:17:52 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:08.491 06:17:52 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:08:08.491 06:17:52 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:08:08.491 06:17:52 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:08:08.491 06:17:52 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58648 00:08:08.491 06:17:52 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:08:08.491 Process app_repeat pid: 58648 00:08:08.491 
06:17:52 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58648' 00:08:08.491 06:17:52 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:08.491 spdk_app_start Round 0 00:08:08.491 06:17:52 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:08:08.491 06:17:52 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:08:08.491 06:17:52 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58648 /var/tmp/spdk-nbd.sock 00:08:08.491 06:17:52 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58648 ']' 00:08:08.491 06:17:52 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:08.491 06:17:52 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:08.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:08.491 06:17:52 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:08.491 06:17:52 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:08.491 06:17:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:08.491 [2024-11-26 06:17:52.590920] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:08:08.491 [2024-11-26 06:17:52.591086] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58648 ] 00:08:08.751 [2024-11-26 06:17:52.773125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:09.011 [2024-11-26 06:17:52.940695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.011 [2024-11-26 06:17:52.940735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:09.579 06:17:53 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:09.579 06:17:53 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:08:09.579 06:17:53 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:09.838 Malloc0 00:08:09.838 06:17:53 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:10.096 Malloc1 00:08:10.096 06:17:54 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:10.096 06:17:54 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:10.096 06:17:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:10.096 06:17:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:10.096 06:17:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:10.096 06:17:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:10.096 06:17:54 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:10.096 06:17:54 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:10.096 06:17:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:10.097 06:17:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:10.097 06:17:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:10.097 06:17:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:10.097 06:17:54 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:10.097 06:17:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:10.097 06:17:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:10.097 06:17:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:10.356 /dev/nbd0 00:08:10.356 06:17:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:10.356 06:17:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:10.356 06:17:54 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:10.356 06:17:54 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:10.356 06:17:54 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:10.356 06:17:54 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:10.356 06:17:54 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:10.356 06:17:54 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:10.356 06:17:54 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:10.356 06:17:54 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:10.356 06:17:54 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:10.356 1+0 records in 00:08:10.356 1+0 
records out 00:08:10.356 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000441954 s, 9.3 MB/s 00:08:10.356 06:17:54 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:10.356 06:17:54 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:10.356 06:17:54 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:10.356 06:17:54 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:10.356 06:17:54 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:10.356 06:17:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:10.356 06:17:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:10.356 06:17:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:10.615 /dev/nbd1 00:08:10.615 06:17:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:10.615 06:17:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:10.615 06:17:54 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:10.615 06:17:54 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:10.615 06:17:54 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:10.615 06:17:54 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:10.615 06:17:54 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:10.615 06:17:54 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:10.615 06:17:54 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:10.615 06:17:54 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:10.615 06:17:54 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:10.615 1+0 records in 00:08:10.615 1+0 records out 00:08:10.615 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000443682 s, 9.2 MB/s 00:08:10.615 06:17:54 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:10.615 06:17:54 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:10.615 06:17:54 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:10.874 06:17:54 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:10.874 06:17:54 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:10.874 06:17:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:10.874 06:17:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:10.874 06:17:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:10.874 06:17:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:10.874 06:17:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:10.874 06:17:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:10.874 { 00:08:10.874 "nbd_device": "/dev/nbd0", 00:08:10.874 "bdev_name": "Malloc0" 00:08:10.874 }, 00:08:10.874 { 00:08:10.874 "nbd_device": "/dev/nbd1", 00:08:10.874 "bdev_name": "Malloc1" 00:08:10.874 } 00:08:10.874 ]' 00:08:10.874 06:17:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:10.874 { 00:08:10.874 "nbd_device": "/dev/nbd0", 00:08:10.874 "bdev_name": "Malloc0" 00:08:10.874 }, 00:08:10.874 { 00:08:10.874 "nbd_device": "/dev/nbd1", 00:08:10.874 "bdev_name": "Malloc1" 00:08:10.874 } 00:08:10.874 ]' 00:08:10.874 06:17:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
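The `waitfornbd` / `waitfornbd_exit` traces above all follow the same shape: retry a check up to 20 times with a short sleep until it succeeds (`grep -q -w nbdX /proc/partitions`, then a 1-block direct read). A generic sketch of that retry pattern, under the assumption that 20 attempts at 0.1 s apart matches the loop bounds visible in the trace (`retry_until` is a hypothetical name, not an SPDK helper):

```shell
# Generic form of the polling loops seen throughout this log
# (waitfornbd / waitfornbd_exit): run a command up to 20 times,
# 0.1 s apart, and report whether it ever succeeded.
retry_until() {
    local i
    for ((i = 1; i <= 20; i++)); do
        if "$@"; then
            return 0    # command succeeded on attempt i
        fi
        sleep 0.1
    done
    return 1            # gave up after 20 attempts
}
```

For example, `retry_until grep -q -w nbd0 /proc/partitions` mirrors the check the trace shows at `nbd_common.sh@38` before each `break`.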
00:08:11.133 06:17:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:11.133 /dev/nbd1' 00:08:11.133 06:17:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:11.133 /dev/nbd1' 00:08:11.133 06:17:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:11.133 06:17:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:11.133 06:17:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:11.134 06:17:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:11.134 06:17:55 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:11.134 06:17:55 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:11.134 06:17:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:11.134 06:17:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:11.134 06:17:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:11.134 06:17:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:11.134 06:17:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:11.134 06:17:55 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:11.134 256+0 records in 00:08:11.134 256+0 records out 00:08:11.134 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0131183 s, 79.9 MB/s 00:08:11.134 06:17:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:11.134 06:17:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:11.134 256+0 records in 00:08:11.134 256+0 records out 00:08:11.134 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0182709 s, 57.4 MB/s 00:08:11.134 06:17:55 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:11.134 06:17:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:11.134 256+0 records in 00:08:11.134 256+0 records out 00:08:11.134 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0263691 s, 39.8 MB/s 00:08:11.134 06:17:55 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:11.134 06:17:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:11.134 06:17:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:11.134 06:17:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:11.134 06:17:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:11.134 06:17:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:11.134 06:17:55 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:11.134 06:17:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:11.134 06:17:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:11.134 06:17:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:11.134 06:17:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:11.134 06:17:55 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:11.134 06:17:55 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:11.134 06:17:55 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:11.134 06:17:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:11.134 06:17:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:11.134 06:17:55 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:11.134 06:17:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:11.134 06:17:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:11.393 06:17:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:11.393 06:17:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:11.393 06:17:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:11.393 06:17:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:11.393 06:17:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:11.393 06:17:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:11.393 06:17:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:11.393 06:17:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:11.393 06:17:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:11.393 06:17:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:11.653 06:17:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:11.653 06:17:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:11.653 06:17:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:11.653 06:17:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:11.653 06:17:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:11.653 06:17:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:11.653 06:17:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:08:11.653 06:17:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:11.653 06:17:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:11.653 06:17:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:11.653 06:17:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:11.914 06:17:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:11.914 06:17:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:11.914 06:17:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:11.914 06:17:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:11.914 06:17:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:11.914 06:17:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:11.914 06:17:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:11.914 06:17:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:11.914 06:17:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:11.914 06:17:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:11.914 06:17:55 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:11.914 06:17:55 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:11.914 06:17:55 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:12.482 06:17:56 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:13.862 [2024-11-26 06:17:57.632217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:13.862 [2024-11-26 06:17:57.756010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.862 [2024-11-26 06:17:57.756013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:13.862 
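The `nbd_dd_data_verify` traces in the round above reduce to: fill a temp file with 1 MiB of random data, `dd` it to every nbd device, then `cmp` each device back against the source. A simplified, self-contained sketch of that flow (assumption: `oflag=direct`, which the real helper uses for the block devices, is dropped here so the sketch also runs against regular files; `dd_data_verify` is an illustrative name, not the SPDK function):

```shell
# Write/verify flow from the nbd_dd_data_verify traces: generate
# 1 MiB (256 x 4 KiB) of random data, copy it to every target,
# then byte-compare each target against the source file.
dd_data_verify() {
    local tmp_file=$1; shift
    local target
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 2>/dev/null
    for target in "$@"; do
        dd if="$tmp_file" of="$target" bs=4096 count=256 2>/dev/null
    done
    for target in "$@"; do
        # -b prints differing bytes, -n 1M limits the comparison
        # to the 1 MiB just written (as in the log's cmp calls).
        cmp -b -n 1M "$tmp_file" "$target" || return 1
    done
    rm "$tmp_file"
}
```

In the log the targets are `/dev/nbd0` and `/dev/nbd1`; any mismatch makes `cmp` (and hence the test) fail before the devices are detached.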
[2024-11-26 06:17:57.962397] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:13.862 [2024-11-26 06:17:57.962467] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:15.776 spdk_app_start Round 1 00:08:15.776 06:17:59 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:15.776 06:17:59 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:08:15.776 06:17:59 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58648 /var/tmp/spdk-nbd.sock 00:08:15.776 06:17:59 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58648 ']' 00:08:15.776 06:17:59 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:15.776 06:17:59 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:15.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:15.776 06:17:59 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:08:15.776 06:17:59 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:15.776 06:17:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:15.776 06:17:59 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:15.776 06:17:59 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:08:15.776 06:17:59 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:16.036 Malloc0 00:08:16.036 06:17:59 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:16.296 Malloc1 00:08:16.296 06:18:00 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:16.296 06:18:00 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:16.296 06:18:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:16.296 06:18:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:16.296 06:18:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:16.296 06:18:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:16.296 06:18:00 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:16.296 06:18:00 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:16.296 06:18:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:16.296 06:18:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:16.296 06:18:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:16.296 06:18:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:16.296 06:18:00 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:16.296 06:18:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:16.296 06:18:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:16.296 06:18:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:16.556 /dev/nbd0 00:08:16.557 06:18:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:16.557 06:18:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:16.557 06:18:00 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:16.557 06:18:00 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:16.557 06:18:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:16.557 06:18:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:16.557 06:18:00 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:16.557 06:18:00 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:16.557 06:18:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:16.557 06:18:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:16.557 06:18:00 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:16.557 1+0 records in 00:08:16.557 1+0 records out 00:08:16.557 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000450011 s, 9.1 MB/s 00:08:16.557 06:18:00 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:16.557 06:18:00 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:16.557 06:18:00 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:16.557 06:18:00 
event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:16.557 06:18:00 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:16.557 06:18:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:16.557 06:18:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:16.557 06:18:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:16.817 /dev/nbd1 00:08:16.817 06:18:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:16.817 06:18:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:16.817 06:18:00 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:16.817 06:18:00 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:16.817 06:18:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:16.817 06:18:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:16.817 06:18:00 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:16.817 06:18:00 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:16.817 06:18:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:16.817 06:18:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:16.817 06:18:00 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:16.817 1+0 records in 00:08:16.817 1+0 records out 00:08:16.817 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000474454 s, 8.6 MB/s 00:08:16.817 06:18:00 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:16.817 06:18:00 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:16.817 06:18:00 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:16.817 06:18:00 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:16.817 06:18:00 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:16.817 06:18:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:16.817 06:18:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:16.817 06:18:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:16.817 06:18:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:16.817 06:18:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:17.077 06:18:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:17.077 { 00:08:17.077 "nbd_device": "/dev/nbd0", 00:08:17.077 "bdev_name": "Malloc0" 00:08:17.077 }, 00:08:17.077 { 00:08:17.077 "nbd_device": "/dev/nbd1", 00:08:17.077 "bdev_name": "Malloc1" 00:08:17.077 } 00:08:17.077 ]' 00:08:17.077 06:18:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:17.077 06:18:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:17.077 { 00:08:17.077 "nbd_device": "/dev/nbd0", 00:08:17.077 "bdev_name": "Malloc0" 00:08:17.077 }, 00:08:17.077 { 00:08:17.077 "nbd_device": "/dev/nbd1", 00:08:17.077 "bdev_name": "Malloc1" 00:08:17.077 } 00:08:17.077 ]' 00:08:17.077 06:18:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:17.077 /dev/nbd1' 00:08:17.077 06:18:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:17.077 /dev/nbd1' 00:08:17.077 06:18:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:17.077 06:18:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:17.077 06:18:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:17.077 
06:18:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:17.077 06:18:01 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:17.077 06:18:01 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:17.077 06:18:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:17.077 06:18:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:17.077 06:18:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:17.077 06:18:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:17.077 06:18:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:17.077 06:18:01 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:17.077 256+0 records in 00:08:17.077 256+0 records out 00:08:17.077 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0128673 s, 81.5 MB/s 00:08:17.077 06:18:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:17.077 06:18:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:17.077 256+0 records in 00:08:17.077 256+0 records out 00:08:17.077 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0243338 s, 43.1 MB/s 00:08:17.077 06:18:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:17.077 06:18:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:17.077 256+0 records in 00:08:17.077 256+0 records out 00:08:17.077 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.030036 s, 34.9 MB/s 00:08:17.077 06:18:01 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:08:17.077 06:18:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:17.077 06:18:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:17.077 06:18:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:17.077 06:18:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:17.077 06:18:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:17.077 06:18:01 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:17.077 06:18:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:17.077 06:18:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:17.336 06:18:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:17.336 06:18:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:17.336 06:18:01 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:17.336 06:18:01 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:17.336 06:18:01 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:17.336 06:18:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:17.336 06:18:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:17.336 06:18:01 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:17.336 06:18:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:17.336 06:18:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:17.595 06:18:01 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:17.595 06:18:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:17.595 06:18:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:17.595 06:18:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:17.595 06:18:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:17.595 06:18:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:17.595 06:18:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:17.595 06:18:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:17.595 06:18:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:17.595 06:18:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:17.855 06:18:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:17.855 06:18:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:17.855 06:18:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:17.855 06:18:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:17.855 06:18:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:17.855 06:18:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:17.855 06:18:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:17.855 06:18:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:17.855 06:18:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:17.855 06:18:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:17.855 06:18:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:17.855 06:18:01 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:17.855 06:18:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:17.855 06:18:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:18.114 06:18:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:18.114 06:18:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:18.114 06:18:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:18.114 06:18:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:18.114 06:18:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:18.114 06:18:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:18.114 06:18:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:18.114 06:18:02 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:18.114 06:18:02 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:18.114 06:18:02 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:18.428 06:18:02 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:19.808 [2024-11-26 06:18:03.884163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:20.067 [2024-11-26 06:18:04.028517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.067 [2024-11-26 06:18:04.028537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:20.327 [2024-11-26 06:18:04.276885] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:20.327 [2024-11-26 06:18:04.277017] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
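After the devices are stopped, `nbd_get_count` queries `nbd_get_disks` again and counts how many `/dev/nbd` entries remain (expecting 0, as the `count=0` lines above show). A dependency-free sketch of that count, with the assumption that a plain `grep` over the JSON stands in for the log's `jq -r '.[] | .nbd_device'` pipeline, and with the JSON passed in directly rather than fetched via `rpc.py -s /var/tmp/spdk-nbd.sock` (`nbd_count_from_json` is a hypothetical name):

```shell
# Count attached nbd devices in nbd_get_disks output, as the
# log's nbd_get_count does. One "nbd_device" key per attached
# device; wc -l prints 0 for the empty-list case '[]', which is
# what the log's "|| true" guard on grep -c also arranges.
nbd_count_from_json() {
    grep -o '"nbd_device"' <<< "$1" | wc -l
}
```

With the two-disk JSON shown earlier in the log this yields 2; after both `nbd_stop_disk` calls the RPC returns `[]` and the count drops to 0, letting the `'[' 0 -ne 0 ']'` check pass.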
00:08:21.709 spdk_app_start Round 2 00:08:21.709 06:18:05 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:21.709 06:18:05 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:08:21.709 06:18:05 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58648 /var/tmp/spdk-nbd.sock 00:08:21.709 06:18:05 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58648 ']' 00:08:21.709 06:18:05 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:21.709 06:18:05 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:21.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:21.709 06:18:05 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:21.709 06:18:05 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:21.709 06:18:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:21.709 06:18:05 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:21.709 06:18:05 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:08:21.709 06:18:05 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:22.280 Malloc0 00:08:22.280 06:18:06 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:22.539 Malloc1 00:08:22.539 06:18:06 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:22.539 06:18:06 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:22.539 06:18:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:22.539 
06:18:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:22.539 06:18:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:22.539 06:18:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:22.539 06:18:06 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:22.539 06:18:06 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:22.539 06:18:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:22.539 06:18:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:22.539 06:18:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:22.539 06:18:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:22.539 06:18:06 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:22.539 06:18:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:22.539 06:18:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:22.539 06:18:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:22.797 /dev/nbd0 00:08:22.797 06:18:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:22.797 06:18:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:22.797 06:18:06 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:22.797 06:18:06 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:22.797 06:18:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:22.797 06:18:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:22.797 06:18:06 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:22.797 06:18:06 
event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:22.797 06:18:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:22.797 06:18:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:22.797 06:18:06 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:22.797 1+0 records in 00:08:22.797 1+0 records out 00:08:22.797 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000587617 s, 7.0 MB/s 00:08:22.797 06:18:06 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:22.797 06:18:06 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:22.797 06:18:06 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:22.797 06:18:06 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:22.797 06:18:06 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:22.797 06:18:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:22.797 06:18:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:22.797 06:18:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:23.057 /dev/nbd1 00:08:23.057 06:18:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:23.057 06:18:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:23.057 06:18:07 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:23.057 06:18:07 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:23.057 06:18:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:23.057 06:18:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:23.057 06:18:07 
event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:23.057 06:18:07 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:23.057 06:18:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:23.057 06:18:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:23.057 06:18:07 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:23.057 1+0 records in 00:08:23.057 1+0 records out 00:08:23.057 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000498126 s, 8.2 MB/s 00:08:23.057 06:18:07 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:23.057 06:18:07 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:23.057 06:18:07 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:23.057 06:18:07 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:23.057 06:18:07 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:23.057 06:18:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:23.057 06:18:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:23.057 06:18:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:23.057 06:18:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:23.057 06:18:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:23.316 06:18:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:23.316 { 00:08:23.316 "nbd_device": "/dev/nbd0", 00:08:23.316 "bdev_name": "Malloc0" 00:08:23.316 }, 00:08:23.316 { 00:08:23.316 "nbd_device": "/dev/nbd1", 00:08:23.316 "bdev_name": 
"Malloc1" 00:08:23.316 } 00:08:23.316 ]' 00:08:23.316 06:18:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:23.316 { 00:08:23.316 "nbd_device": "/dev/nbd0", 00:08:23.316 "bdev_name": "Malloc0" 00:08:23.316 }, 00:08:23.316 { 00:08:23.316 "nbd_device": "/dev/nbd1", 00:08:23.316 "bdev_name": "Malloc1" 00:08:23.316 } 00:08:23.316 ]' 00:08:23.317 06:18:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:23.317 06:18:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:23.317 /dev/nbd1' 00:08:23.317 06:18:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:23.317 06:18:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:23.317 /dev/nbd1' 00:08:23.317 06:18:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:23.317 06:18:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:23.317 06:18:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:23.317 06:18:07 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:23.317 06:18:07 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:23.317 06:18:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:23.317 06:18:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:23.317 06:18:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:23.317 06:18:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:23.317 06:18:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:23.317 06:18:07 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:23.317 256+0 records in 00:08:23.317 256+0 records out 00:08:23.317 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0119677 s, 87.6 MB/s 
00:08:23.317 06:18:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:23.317 06:18:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:23.317 256+0 records in 00:08:23.317 256+0 records out 00:08:23.317 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0233591 s, 44.9 MB/s 00:08:23.317 06:18:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:23.317 06:18:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:23.317 256+0 records in 00:08:23.317 256+0 records out 00:08:23.317 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0259653 s, 40.4 MB/s 00:08:23.317 06:18:07 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:23.317 06:18:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:23.317 06:18:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:23.317 06:18:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:23.317 06:18:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:23.317 06:18:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:23.317 06:18:07 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:23.317 06:18:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:23.317 06:18:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:23.317 06:18:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:23.317 06:18:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:08:23.575 06:18:07 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:23.575 06:18:07 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:23.575 06:18:07 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:23.575 06:18:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:23.575 06:18:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:23.575 06:18:07 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:23.575 06:18:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:23.575 06:18:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:23.575 06:18:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:23.575 06:18:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:23.575 06:18:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:23.575 06:18:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:23.575 06:18:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:23.575 06:18:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:23.835 06:18:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:23.835 06:18:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:23.835 06:18:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:23.835 06:18:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:23.835 06:18:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:23.835 06:18:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:08:23.835 06:18:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:23.835 06:18:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:23.835 06:18:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:23.835 06:18:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:23.835 06:18:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:23.835 06:18:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:23.835 06:18:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:23.835 06:18:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:24.094 06:18:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:24.094 06:18:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:24.094 06:18:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:24.094 06:18:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:24.352 06:18:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:24.352 06:18:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:24.352 06:18:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:24.353 06:18:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:24.353 06:18:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:24.353 06:18:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:24.353 06:18:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:24.353 06:18:08 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:24.353 06:18:08 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:24.353 06:18:08 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:24.612 06:18:08 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:25.990 [2024-11-26 06:18:09.957527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:25.990 [2024-11-26 06:18:10.083301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.990 [2024-11-26 06:18:10.083303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:26.248 [2024-11-26 06:18:10.296230] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:26.248 [2024-11-26 06:18:10.296313] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:27.622 06:18:11 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58648 /var/tmp/spdk-nbd.sock 00:08:27.622 06:18:11 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58648 ']' 00:08:27.622 06:18:11 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:27.622 06:18:11 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:27.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:27.622 06:18:11 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:08:27.622 06:18:11 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:27.622 06:18:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:27.882 06:18:11 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:27.882 06:18:11 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:08:27.882 06:18:11 event.app_repeat -- event/event.sh@39 -- # killprocess 58648 00:08:27.882 06:18:11 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58648 ']' 00:08:27.882 06:18:11 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58648 00:08:27.882 06:18:11 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:08:27.882 06:18:11 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:27.882 06:18:11 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58648 00:08:27.882 06:18:12 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:27.882 06:18:12 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:27.882 killing process with pid 58648 00:08:27.882 06:18:12 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58648' 00:08:27.882 06:18:12 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58648 00:08:28.143 06:18:12 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58648 00:08:29.076 spdk_app_start is called in Round 0. 00:08:29.076 Shutdown signal received, stop current app iteration 00:08:29.076 Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 reinitialization... 00:08:29.076 spdk_app_start is called in Round 1. 00:08:29.076 Shutdown signal received, stop current app iteration 00:08:29.076 Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 reinitialization... 00:08:29.076 spdk_app_start is called in Round 2. 
00:08:29.076 Shutdown signal received, stop current app iteration 00:08:29.076 Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 reinitialization... 00:08:29.076 spdk_app_start is called in Round 3. 00:08:29.076 Shutdown signal received, stop current app iteration 00:08:29.076 06:18:13 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:08:29.076 06:18:13 event.app_repeat -- event/event.sh@42 -- # return 0 00:08:29.076 00:08:29.076 real 0m20.639s 00:08:29.076 user 0m44.267s 00:08:29.076 sys 0m3.284s 00:08:29.076 06:18:13 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:29.076 06:18:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:29.076 ************************************ 00:08:29.076 END TEST app_repeat 00:08:29.076 ************************************ 00:08:29.076 06:18:13 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:08:29.076 06:18:13 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:08:29.336 06:18:13 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:29.336 06:18:13 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:29.336 06:18:13 event -- common/autotest_common.sh@10 -- # set +x 00:08:29.336 ************************************ 00:08:29.336 START TEST cpu_locks 00:08:29.336 ************************************ 00:08:29.336 06:18:13 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:08:29.336 * Looking for test storage... 
00:08:29.336 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:08:29.336 06:18:13 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:29.336 06:18:13 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:29.336 06:18:13 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:08:29.336 06:18:13 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:29.336 06:18:13 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:29.336 06:18:13 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:29.336 06:18:13 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:29.336 06:18:13 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:08:29.336 06:18:13 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:08:29.336 06:18:13 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:08:29.336 06:18:13 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:08:29.336 06:18:13 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:08:29.336 06:18:13 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:08:29.336 06:18:13 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:08:29.336 06:18:13 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:29.336 06:18:13 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:08:29.336 06:18:13 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:08:29.336 06:18:13 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:29.336 06:18:13 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:29.336 06:18:13 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:08:29.336 06:18:13 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:08:29.336 06:18:13 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:29.336 06:18:13 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:08:29.336 06:18:13 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:08:29.336 06:18:13 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:08:29.336 06:18:13 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:08:29.336 06:18:13 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:29.336 06:18:13 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:08:29.336 06:18:13 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:08:29.336 06:18:13 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:29.336 06:18:13 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:29.336 06:18:13 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:08:29.336 06:18:13 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:29.336 06:18:13 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:29.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.336 --rc genhtml_branch_coverage=1 00:08:29.336 --rc genhtml_function_coverage=1 00:08:29.336 --rc genhtml_legend=1 00:08:29.336 --rc geninfo_all_blocks=1 00:08:29.336 --rc geninfo_unexecuted_blocks=1 00:08:29.336 00:08:29.336 ' 00:08:29.336 06:18:13 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:29.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.336 --rc genhtml_branch_coverage=1 00:08:29.336 --rc genhtml_function_coverage=1 00:08:29.336 --rc genhtml_legend=1 00:08:29.336 --rc geninfo_all_blocks=1 00:08:29.336 --rc geninfo_unexecuted_blocks=1 
00:08:29.336 00:08:29.336 ' 00:08:29.336 06:18:13 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:29.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.336 --rc genhtml_branch_coverage=1 00:08:29.336 --rc genhtml_function_coverage=1 00:08:29.336 --rc genhtml_legend=1 00:08:29.336 --rc geninfo_all_blocks=1 00:08:29.336 --rc geninfo_unexecuted_blocks=1 00:08:29.336 00:08:29.336 ' 00:08:29.336 06:18:13 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:29.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.336 --rc genhtml_branch_coverage=1 00:08:29.336 --rc genhtml_function_coverage=1 00:08:29.336 --rc genhtml_legend=1 00:08:29.336 --rc geninfo_all_blocks=1 00:08:29.336 --rc geninfo_unexecuted_blocks=1 00:08:29.336 00:08:29.336 ' 00:08:29.336 06:18:13 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:08:29.336 06:18:13 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:08:29.336 06:18:13 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:08:29.336 06:18:13 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:08:29.336 06:18:13 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:29.336 06:18:13 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:29.336 06:18:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:29.336 ************************************ 00:08:29.336 START TEST default_locks 00:08:29.336 ************************************ 00:08:29.336 06:18:13 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:08:29.336 06:18:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59118 00:08:29.336 06:18:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:29.336 
06:18:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59118 00:08:29.336 06:18:13 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59118 ']' 00:08:29.336 06:18:13 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:29.336 06:18:13 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:29.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:29.336 06:18:13 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:29.336 06:18:13 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:29.336 06:18:13 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:29.595 [2024-11-26 06:18:13.556886] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:08:29.595 [2024-11-26 06:18:13.557029] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59118 ] 00:08:29.854 [2024-11-26 06:18:13.739552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.854 [2024-11-26 06:18:13.866188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.792 06:18:14 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:30.792 06:18:14 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:08:30.792 06:18:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59118 00:08:30.792 06:18:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59118 00:08:30.792 06:18:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:31.051 06:18:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59118 00:08:31.051 06:18:15 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 59118 ']' 00:08:31.051 06:18:15 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 59118 00:08:31.051 06:18:15 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:08:31.051 06:18:15 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:31.051 06:18:15 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59118 00:08:31.051 06:18:15 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:31.051 06:18:15 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:31.051 killing process with pid 59118 00:08:31.051 06:18:15 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 59118' 00:08:31.051 06:18:15 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 59118 00:08:31.051 06:18:15 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 59118 00:08:34.347 06:18:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59118 00:08:34.347 06:18:17 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:08:34.347 06:18:17 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59118 00:08:34.347 06:18:17 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:08:34.347 06:18:17 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:34.347 06:18:17 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:08:34.347 06:18:17 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:34.347 06:18:17 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 59118 00:08:34.347 06:18:17 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59118 ']' 00:08:34.347 06:18:17 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:34.347 06:18:17 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:34.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:34.347 06:18:17 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:34.347 06:18:17 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:34.347 06:18:17 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:34.347 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59118) - No such process 00:08:34.347 ERROR: process (pid: 59118) is no longer running 00:08:34.347 06:18:17 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:34.347 06:18:17 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:08:34.347 06:18:17 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:08:34.347 06:18:17 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:34.347 06:18:17 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:34.347 06:18:17 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:34.347 06:18:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:08:34.347 06:18:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:34.347 06:18:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:08:34.347 06:18:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:34.347 00:08:34.347 real 0m4.436s 00:08:34.347 user 0m4.369s 00:08:34.347 sys 0m0.651s 00:08:34.347 06:18:17 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:34.347 06:18:17 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:34.347 ************************************ 00:08:34.347 END TEST default_locks 00:08:34.347 ************************************ 00:08:34.347 06:18:17 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:08:34.347 06:18:17 event.cpu_locks -- common/autotest_common.sh@1105 -- # 
'[' 2 -le 1 ']' 00:08:34.347 06:18:17 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:34.347 06:18:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:34.347 ************************************ 00:08:34.347 START TEST default_locks_via_rpc 00:08:34.347 ************************************ 00:08:34.347 06:18:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:08:34.347 06:18:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59194 00:08:34.347 06:18:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:34.347 06:18:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59194 00:08:34.347 06:18:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59194 ']' 00:08:34.347 06:18:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:34.347 06:18:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:34.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:34.347 06:18:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:34.347 06:18:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:34.347 06:18:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:34.347 [2024-11-26 06:18:18.063158] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:08:34.347 [2024-11-26 06:18:18.063291] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59194 ] 00:08:34.347 [2024-11-26 06:18:18.222422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.347 [2024-11-26 06:18:18.368737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.282 06:18:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:35.282 06:18:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:35.282 06:18:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:08:35.282 06:18:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.282 06:18:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:35.282 06:18:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.282 06:18:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:08:35.282 06:18:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:35.282 06:18:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:08:35.282 06:18:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:35.282 06:18:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:08:35.282 06:18:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.282 06:18:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:35.282 06:18:19 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.282 06:18:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59194 00:08:35.282 06:18:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59194 00:08:35.282 06:18:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:35.542 06:18:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59194 00:08:35.542 06:18:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 59194 ']' 00:08:35.542 06:18:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 59194 00:08:35.542 06:18:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:08:35.801 06:18:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:35.801 06:18:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59194 00:08:35.801 06:18:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:35.801 06:18:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:35.801 killing process with pid 59194 00:08:35.801 06:18:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59194' 00:08:35.801 06:18:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 59194 00:08:35.801 06:18:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 59194 00:08:39.084 00:08:39.084 real 0m4.529s 00:08:39.084 user 0m4.474s 00:08:39.084 sys 0m0.669s 00:08:39.084 06:18:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:39.084 06:18:22 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:39.084 ************************************ 00:08:39.084 END TEST default_locks_via_rpc 00:08:39.084 ************************************ 00:08:39.084 06:18:22 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:08:39.084 06:18:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:39.084 06:18:22 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:39.084 06:18:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:39.084 ************************************ 00:08:39.084 START TEST non_locking_app_on_locked_coremask 00:08:39.084 ************************************ 00:08:39.084 06:18:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:08:39.084 06:18:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59273 00:08:39.084 06:18:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:39.084 06:18:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59273 /var/tmp/spdk.sock 00:08:39.084 06:18:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59273 ']' 00:08:39.084 06:18:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.084 06:18:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:39.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:39.084 06:18:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:39.084 06:18:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:39.084 06:18:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:39.084 [2024-11-26 06:18:22.677134] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:08:39.084 [2024-11-26 06:18:22.677292] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59273 ] 00:08:39.084 [2024-11-26 06:18:22.864200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.084 [2024-11-26 06:18:23.016455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.039 06:18:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:40.039 06:18:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:40.039 06:18:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59295 00:08:40.039 06:18:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:08:40.039 06:18:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59295 /var/tmp/spdk2.sock 00:08:40.039 06:18:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59295 ']' 00:08:40.039 06:18:24 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:40.039 06:18:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:40.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:40.039 06:18:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:40.039 06:18:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:40.039 06:18:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:40.332 [2024-11-26 06:18:24.184555] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:08:40.332 [2024-11-26 06:18:24.184705] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59295 ] 00:08:40.332 [2024-11-26 06:18:24.368289] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:40.332 [2024-11-26 06:18:24.368378] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.591 [2024-11-26 06:18:24.629327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.159 06:18:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:43.159 06:18:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:43.159 06:18:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59273 00:08:43.159 06:18:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:43.159 06:18:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59273 00:08:43.746 06:18:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59273 00:08:43.746 06:18:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59273 ']' 00:08:43.746 06:18:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59273 00:08:43.746 06:18:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:43.746 06:18:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:43.746 06:18:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59273 00:08:43.746 06:18:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:43.746 06:18:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:43.746 killing process with pid 59273 00:08:43.746 06:18:27 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 59273' 00:08:43.746 06:18:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59273 00:08:43.746 06:18:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59273 00:08:49.023 06:18:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59295 00:08:49.023 06:18:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59295 ']' 00:08:49.023 06:18:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59295 00:08:49.023 06:18:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:49.023 06:18:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:49.023 06:18:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59295 00:08:49.023 06:18:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:49.023 06:18:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:49.023 killing process with pid 59295 00:08:49.023 06:18:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59295' 00:08:49.023 06:18:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59295 00:08:49.023 06:18:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59295 00:08:51.564 00:08:51.564 real 0m13.048s 00:08:51.564 user 0m13.227s 00:08:51.564 sys 0m1.637s 00:08:51.564 06:18:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:08:51.564 06:18:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:51.564 ************************************ 00:08:51.564 END TEST non_locking_app_on_locked_coremask 00:08:51.564 ************************************ 00:08:51.564 06:18:35 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:08:51.564 06:18:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:51.564 06:18:35 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:51.564 06:18:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:51.564 ************************************ 00:08:51.564 START TEST locking_app_on_unlocked_coremask 00:08:51.564 ************************************ 00:08:51.564 06:18:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:08:51.564 06:18:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59454 00:08:51.564 06:18:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:08:51.564 06:18:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59454 /var/tmp/spdk.sock 00:08:51.564 06:18:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59454 ']' 00:08:51.564 06:18:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.564 06:18:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:51.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:51.564 06:18:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:51.564 06:18:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:51.564 06:18:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:51.822 [2024-11-26 06:18:35.782803] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:08:51.822 [2024-11-26 06:18:35.782934] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59454 ] 00:08:51.822 [2024-11-26 06:18:35.950404] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:08:51.822 [2024-11-26 06:18:35.950485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.081 [2024-11-26 06:18:36.070300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.018 06:18:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:53.018 06:18:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:53.018 06:18:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59470 00:08:53.018 06:18:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59470 /var/tmp/spdk2.sock 00:08:53.018 06:18:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:53.018 06:18:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59470 ']' 
00:08:53.018 06:18:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:53.018 06:18:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:53.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:53.018 06:18:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:53.018 06:18:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:53.018 06:18:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:53.018 [2024-11-26 06:18:37.128143] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:08:53.018 [2024-11-26 06:18:37.128286] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59470 ] 00:08:53.277 [2024-11-26 06:18:37.313096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.537 [2024-11-26 06:18:37.577447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.114 06:18:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:56.114 06:18:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:56.114 06:18:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59470 00:08:56.114 06:18:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59470 00:08:56.114 06:18:39 
event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:56.114 06:18:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59454 00:08:56.114 06:18:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59454 ']' 00:08:56.114 06:18:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59454 00:08:56.114 06:18:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:56.114 06:18:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:56.114 06:18:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59454 00:08:56.114 06:18:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:56.114 06:18:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:56.114 06:18:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59454' 00:08:56.114 killing process with pid 59454 00:08:56.114 06:18:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59454 00:08:56.114 06:18:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59454 00:09:01.474 06:18:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59470 00:09:01.474 06:18:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59470 ']' 00:09:01.474 06:18:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59470 00:09:01.474 06:18:45 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@959 -- # uname 00:09:01.474 06:18:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:01.474 06:18:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59470 00:09:01.474 06:18:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:01.474 06:18:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:01.474 killing process with pid 59470 00:09:01.474 06:18:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59470' 00:09:01.474 06:18:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59470 00:09:01.474 06:18:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59470 00:09:04.026 00:09:04.026 real 0m12.368s 00:09:04.026 user 0m12.688s 00:09:04.026 sys 0m1.274s 00:09:04.026 06:18:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:04.026 06:18:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:04.026 ************************************ 00:09:04.026 END TEST locking_app_on_unlocked_coremask 00:09:04.026 ************************************ 00:09:04.026 06:18:48 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:09:04.026 06:18:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:04.026 06:18:48 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:04.026 06:18:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:04.026 ************************************ 00:09:04.026 START TEST 
locking_app_on_locked_coremask 00:09:04.026 ************************************ 00:09:04.026 06:18:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:09:04.026 06:18:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:04.026 06:18:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59630 00:09:04.026 06:18:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59630 /var/tmp/spdk.sock 00:09:04.026 06:18:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59630 ']' 00:09:04.026 06:18:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:04.026 06:18:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:04.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:04.026 06:18:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:04.026 06:18:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:04.026 06:18:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:04.286 [2024-11-26 06:18:48.241147] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:09:04.286 [2024-11-26 06:18:48.241533] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59630 ] 00:09:04.619 [2024-11-26 06:18:48.433755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.619 [2024-11-26 06:18:48.558913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.570 06:18:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:05.570 06:18:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:05.570 06:18:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59646 00:09:05.570 06:18:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59646 /var/tmp/spdk2.sock 00:09:05.570 06:18:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:05.570 06:18:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:09:05.570 06:18:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59646 /var/tmp/spdk2.sock 00:09:05.570 06:18:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:09:05.570 06:18:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:05.570 06:18:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:09:05.570 06:18:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:09:05.570 06:18:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59646 /var/tmp/spdk2.sock 00:09:05.570 06:18:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59646 ']' 00:09:05.570 06:18:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:05.570 06:18:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:05.570 06:18:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:05.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:05.570 06:18:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:05.570 06:18:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:05.570 [2024-11-26 06:18:49.580811] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:09:05.570 [2024-11-26 06:18:49.580950] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59646 ] 00:09:05.830 [2024-11-26 06:18:49.763367] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59630 has claimed it. 00:09:05.830 [2024-11-26 06:18:49.763465] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:09:06.090 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59646) - No such process 00:09:06.090 ERROR: process (pid: 59646) is no longer running 00:09:06.090 06:18:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:06.090 06:18:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:09:06.090 06:18:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:09:06.090 06:18:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:06.090 06:18:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:06.090 06:18:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:06.090 06:18:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59630 00:09:06.090 06:18:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59630 00:09:06.090 06:18:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:06.659 06:18:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59630 00:09:06.659 06:18:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59630 ']' 00:09:06.659 06:18:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59630 00:09:06.659 06:18:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:06.659 06:18:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:06.659 06:18:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59630 00:09:06.659 
06:18:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:06.659 06:18:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:06.659 06:18:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59630' 00:09:06.659 killing process with pid 59630 00:09:06.659 06:18:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59630 00:09:06.659 06:18:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59630 00:09:09.197 00:09:09.197 real 0m5.111s 00:09:09.197 user 0m5.283s 00:09:09.197 sys 0m0.895s 00:09:09.197 06:18:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:09.197 06:18:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:09.197 ************************************ 00:09:09.197 END TEST locking_app_on_locked_coremask 00:09:09.197 ************************************ 00:09:09.197 06:18:53 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:09:09.197 06:18:53 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:09.197 06:18:53 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:09.197 06:18:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:09.197 ************************************ 00:09:09.197 START TEST locking_overlapped_coremask 00:09:09.197 ************************************ 00:09:09.197 06:18:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:09:09.197 06:18:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59717 00:09:09.197 06:18:53 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:09:09.197 06:18:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59717 /var/tmp/spdk.sock 00:09:09.197 06:18:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59717 ']' 00:09:09.197 06:18:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:09.197 06:18:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:09.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:09.197 06:18:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:09.197 06:18:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:09.197 06:18:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:09.457 [2024-11-26 06:18:53.405139] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:09:09.457 [2024-11-26 06:18:53.405294] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59717 ] 00:09:09.715 [2024-11-26 06:18:53.594539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:09.715 [2024-11-26 06:18:53.724281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:09.715 [2024-11-26 06:18:53.724439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.715 [2024-11-26 06:18:53.724477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:10.653 06:18:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:10.653 06:18:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:10.653 06:18:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:09:10.654 06:18:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59745 00:09:10.654 06:18:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59745 /var/tmp/spdk2.sock 00:09:10.654 06:18:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:09:10.654 06:18:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59745 /var/tmp/spdk2.sock 00:09:10.654 06:18:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:09:10.654 06:18:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:10.654 06:18:54 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@644 -- # type -t waitforlisten 00:09:10.654 06:18:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:10.654 06:18:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59745 /var/tmp/spdk2.sock 00:09:10.654 06:18:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59745 ']' 00:09:10.654 06:18:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:10.654 06:18:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:10.654 06:18:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:10.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:10.654 06:18:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:10.654 06:18:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:10.654 [2024-11-26 06:18:54.784507] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:09:10.654 [2024-11-26 06:18:54.784661] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59745 ] 00:09:10.915 [2024-11-26 06:18:54.966441] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59717 has claimed it. 00:09:10.915 [2024-11-26 06:18:54.970163] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:09:11.482 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59745) - No such process 00:09:11.482 ERROR: process (pid: 59745) is no longer running 00:09:11.482 06:18:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:11.482 06:18:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:09:11.482 06:18:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:09:11.482 06:18:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:11.482 06:18:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:11.482 06:18:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:11.482 06:18:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:09:11.482 06:18:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:11.482 06:18:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:11.482 06:18:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:11.482 06:18:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59717 00:09:11.482 06:18:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59717 ']' 00:09:11.482 06:18:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59717 00:09:11.482 06:18:55 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:09:11.482 06:18:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:11.482 06:18:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59717 00:09:11.482 06:18:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:11.482 06:18:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:11.482 06:18:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59717' 00:09:11.482 killing process with pid 59717 00:09:11.482 06:18:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59717 00:09:11.482 06:18:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59717 00:09:14.016 00:09:14.016 real 0m4.727s 00:09:14.016 user 0m12.808s 00:09:14.016 sys 0m0.645s 00:09:14.016 06:18:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:14.016 06:18:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:14.016 ************************************ 00:09:14.016 END TEST locking_overlapped_coremask 00:09:14.016 ************************************ 00:09:14.016 06:18:58 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:09:14.016 06:18:58 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:14.016 06:18:58 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:14.016 06:18:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:14.016 ************************************ 00:09:14.016 START TEST 
locking_overlapped_coremask_via_rpc 00:09:14.016 ************************************ 00:09:14.016 06:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:09:14.016 06:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59809 00:09:14.016 06:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59809 /var/tmp/spdk.sock 00:09:14.016 06:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:09:14.016 06:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59809 ']' 00:09:14.016 06:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.016 06:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:14.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:14.016 06:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:14.016 06:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:14.016 06:18:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.314 [2024-11-26 06:18:58.187493] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:09:14.314 [2024-11-26 06:18:58.187619] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59809 ] 00:09:14.314 [2024-11-26 06:18:58.353216] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:09:14.314 [2024-11-26 06:18:58.353271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:14.581 [2024-11-26 06:18:58.484691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:14.581 [2024-11-26 06:18:58.484846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.581 [2024-11-26 06:18:58.484879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:15.520 06:18:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:15.520 06:18:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:15.520 06:18:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59827 00:09:15.520 06:18:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59827 /var/tmp/spdk2.sock 00:09:15.520 06:18:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:09:15.520 06:18:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59827 ']' 00:09:15.520 06:18:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:15.520 06:18:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:15.520 Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:15.520 06:18:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:15.520 06:18:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:15.520 06:18:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:15.520 [2024-11-26 06:18:59.551023] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:09:15.520 [2024-11-26 06:18:59.551159] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59827 ] 00:09:15.779 [2024-11-26 06:18:59.732947] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:15.779 [2024-11-26 06:18:59.733014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:16.037 [2024-11-26 06:19:00.001091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:16.037 [2024-11-26 06:19:00.004315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:16.037 [2024-11-26 06:19:00.004364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:18.572 06:19:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:18.572 06:19:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:18.572 06:19:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:09:18.572 06:19:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.572 06:19:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:18.572 06:19:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.572 06:19:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:18.572 06:19:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:09:18.572 06:19:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:18.572 06:19:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:18.572 06:19:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:18.572 06:19:02 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:18.572 06:19:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:18.572 06:19:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:18.572 06:19:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.572 06:19:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:18.572 [2024-11-26 06:19:02.206348] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59809 has claimed it. 00:09:18.572 request: 00:09:18.572 { 00:09:18.572 "method": "framework_enable_cpumask_locks", 00:09:18.572 "req_id": 1 00:09:18.572 } 00:09:18.572 Got JSON-RPC error response 00:09:18.572 response: 00:09:18.572 { 00:09:18.572 "code": -32603, 00:09:18.572 "message": "Failed to claim CPU core: 2" 00:09:18.572 } 00:09:18.572 06:19:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:18.572 06:19:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:09:18.572 06:19:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:18.572 06:19:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:18.572 06:19:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:18.572 06:19:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59809 /var/tmp/spdk.sock 00:09:18.572 06:19:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # 
'[' -z 59809 ']' 00:09:18.572 06:19:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:18.572 06:19:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:18.572 06:19:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:18.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:18.572 06:19:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:18.572 06:19:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:18.572 06:19:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:18.572 06:19:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:18.572 06:19:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59827 /var/tmp/spdk2.sock 00:09:18.572 06:19:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59827 ']' 00:09:18.572 06:19:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:18.572 06:19:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:18.572 06:19:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:18.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:09:18.573 06:19:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:18.573 06:19:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:18.573 06:19:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:18.573 06:19:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:18.573 06:19:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:09:18.573 06:19:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:18.573 06:19:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:18.573 06:19:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:18.573 00:09:18.573 real 0m4.611s 00:09:18.573 user 0m1.405s 00:09:18.573 sys 0m0.211s 00:09:18.573 06:19:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:18.573 06:19:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:18.573 ************************************ 00:09:18.573 END TEST locking_overlapped_coremask_via_rpc 00:09:18.573 ************************************ 00:09:18.833 06:19:02 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:09:18.833 06:19:02 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59809 ]] 00:09:18.833 06:19:02 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59809 00:09:18.833 06:19:02 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59809 ']' 00:09:18.833 06:19:02 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59809 00:09:18.833 06:19:02 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:09:18.833 06:19:02 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:18.833 06:19:02 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59809 00:09:18.833 killing process with pid 59809 00:09:18.834 06:19:02 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:18.834 06:19:02 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:18.834 06:19:02 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59809' 00:09:18.834 06:19:02 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59809 00:09:18.834 06:19:02 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59809 00:09:21.375 06:19:05 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59827 ]] 00:09:21.375 06:19:05 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59827 00:09:21.375 06:19:05 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59827 ']' 00:09:21.375 06:19:05 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59827 00:09:21.375 06:19:05 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:09:21.375 06:19:05 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:21.375 06:19:05 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59827 00:09:21.375 killing process with pid 59827 00:09:21.375 06:19:05 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:09:21.375 06:19:05 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:09:21.375 06:19:05 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 59827' 00:09:21.375 06:19:05 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59827 00:09:21.375 06:19:05 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59827 00:09:23.912 06:19:07 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:09:23.912 06:19:07 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:09:23.912 06:19:07 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59809 ]] 00:09:23.912 06:19:07 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59809 00:09:23.912 06:19:07 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59809 ']' 00:09:23.912 06:19:07 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59809 00:09:23.912 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59809) - No such process 00:09:23.912 Process with pid 59809 is not found 00:09:23.912 06:19:07 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59809 is not found' 00:09:23.912 06:19:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59827 ]] 00:09:23.912 06:19:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59827 00:09:23.912 06:19:07 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59827 ']' 00:09:23.912 06:19:07 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59827 00:09:23.912 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59827) - No such process 00:09:23.912 Process with pid 59827 is not found 00:09:23.912 06:19:07 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59827 is not found' 00:09:23.912 06:19:07 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:09:23.912 00:09:23.912 real 0m54.780s 00:09:23.912 user 1m32.378s 00:09:23.912 sys 0m7.250s 00:09:23.912 06:19:08 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:23.912 06:19:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:23.912 
************************************ 00:09:23.912 END TEST cpu_locks 00:09:23.912 ************************************ 00:09:24.172 ************************************ 00:09:24.172 END TEST event 00:09:24.172 ************************************ 00:09:24.172 00:09:24.172 real 1m28.020s 00:09:24.172 user 2m38.771s 00:09:24.172 sys 0m11.847s 00:09:24.172 06:19:08 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:24.172 06:19:08 event -- common/autotest_common.sh@10 -- # set +x 00:09:24.172 06:19:08 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:09:24.172 06:19:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:24.172 06:19:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:24.172 06:19:08 -- common/autotest_common.sh@10 -- # set +x 00:09:24.172 ************************************ 00:09:24.172 START TEST thread 00:09:24.172 ************************************ 00:09:24.172 06:19:08 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:09:24.172 * Looking for test storage... 
00:09:24.172 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:09:24.172 06:19:08 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:24.172 06:19:08 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:09:24.172 06:19:08 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:24.448 06:19:08 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:24.448 06:19:08 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:24.448 06:19:08 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:24.448 06:19:08 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:24.448 06:19:08 thread -- scripts/common.sh@336 -- # IFS=.-: 00:09:24.448 06:19:08 thread -- scripts/common.sh@336 -- # read -ra ver1 00:09:24.448 06:19:08 thread -- scripts/common.sh@337 -- # IFS=.-: 00:09:24.448 06:19:08 thread -- scripts/common.sh@337 -- # read -ra ver2 00:09:24.448 06:19:08 thread -- scripts/common.sh@338 -- # local 'op=<' 00:09:24.448 06:19:08 thread -- scripts/common.sh@340 -- # ver1_l=2 00:09:24.448 06:19:08 thread -- scripts/common.sh@341 -- # ver2_l=1 00:09:24.448 06:19:08 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:24.448 06:19:08 thread -- scripts/common.sh@344 -- # case "$op" in 00:09:24.448 06:19:08 thread -- scripts/common.sh@345 -- # : 1 00:09:24.448 06:19:08 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:24.448 06:19:08 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:24.448 06:19:08 thread -- scripts/common.sh@365 -- # decimal 1 00:09:24.448 06:19:08 thread -- scripts/common.sh@353 -- # local d=1 00:09:24.448 06:19:08 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:24.448 06:19:08 thread -- scripts/common.sh@355 -- # echo 1 00:09:24.448 06:19:08 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:09:24.448 06:19:08 thread -- scripts/common.sh@366 -- # decimal 2 00:09:24.448 06:19:08 thread -- scripts/common.sh@353 -- # local d=2 00:09:24.448 06:19:08 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:24.448 06:19:08 thread -- scripts/common.sh@355 -- # echo 2 00:09:24.448 06:19:08 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:09:24.448 06:19:08 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:24.448 06:19:08 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:24.448 06:19:08 thread -- scripts/common.sh@368 -- # return 0 00:09:24.448 06:19:08 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:24.448 06:19:08 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:24.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.448 --rc genhtml_branch_coverage=1 00:09:24.448 --rc genhtml_function_coverage=1 00:09:24.448 --rc genhtml_legend=1 00:09:24.448 --rc geninfo_all_blocks=1 00:09:24.448 --rc geninfo_unexecuted_blocks=1 00:09:24.448 00:09:24.448 ' 00:09:24.448 06:19:08 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:24.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.448 --rc genhtml_branch_coverage=1 00:09:24.448 --rc genhtml_function_coverage=1 00:09:24.448 --rc genhtml_legend=1 00:09:24.448 --rc geninfo_all_blocks=1 00:09:24.448 --rc geninfo_unexecuted_blocks=1 00:09:24.448 00:09:24.448 ' 00:09:24.448 06:19:08 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:24.448 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.448 --rc genhtml_branch_coverage=1 00:09:24.448 --rc genhtml_function_coverage=1 00:09:24.448 --rc genhtml_legend=1 00:09:24.448 --rc geninfo_all_blocks=1 00:09:24.448 --rc geninfo_unexecuted_blocks=1 00:09:24.448 00:09:24.448 ' 00:09:24.448 06:19:08 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:24.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.448 --rc genhtml_branch_coverage=1 00:09:24.448 --rc genhtml_function_coverage=1 00:09:24.448 --rc genhtml_legend=1 00:09:24.448 --rc geninfo_all_blocks=1 00:09:24.448 --rc geninfo_unexecuted_blocks=1 00:09:24.448 00:09:24.448 ' 00:09:24.448 06:19:08 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:24.448 06:19:08 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:09:24.448 06:19:08 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:24.448 06:19:08 thread -- common/autotest_common.sh@10 -- # set +x 00:09:24.448 ************************************ 00:09:24.448 START TEST thread_poller_perf 00:09:24.448 ************************************ 00:09:24.448 06:19:08 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:24.448 [2024-11-26 06:19:08.419546] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:09:24.448 [2024-11-26 06:19:08.419784] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60033 ] 00:09:24.752 [2024-11-26 06:19:08.612372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.752 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:09:24.752 [2024-11-26 06:19:08.735706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.134 [2024-11-26T06:19:10.271Z] ====================================== 00:09:26.134 [2024-11-26T06:19:10.271Z] busy:2299953184 (cyc) 00:09:26.134 [2024-11-26T06:19:10.271Z] total_run_count: 357000 00:09:26.134 [2024-11-26T06:19:10.271Z] tsc_hz: 2290000000 (cyc) 00:09:26.134 [2024-11-26T06:19:10.271Z] ====================================== 00:09:26.134 [2024-11-26T06:19:10.271Z] poller_cost: 6442 (cyc), 2813 (nsec) 00:09:26.134 00:09:26.134 real 0m1.628s 00:09:26.134 user 0m1.403s 00:09:26.134 sys 0m0.116s 00:09:26.134 06:19:09 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:26.134 06:19:09 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:26.134 ************************************ 00:09:26.134 END TEST thread_poller_perf 00:09:26.134 ************************************ 00:09:26.134 06:19:10 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:26.134 06:19:10 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:09:26.134 06:19:10 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:26.134 06:19:10 thread -- common/autotest_common.sh@10 -- # set +x 00:09:26.134 ************************************ 00:09:26.134 START TEST thread_poller_perf 00:09:26.134 
************************************ 00:09:26.134 06:19:10 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:26.134 [2024-11-26 06:19:10.120197] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:09:26.134 [2024-11-26 06:19:10.120696] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60064 ] 00:09:26.393 [2024-11-26 06:19:10.293876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.393 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:09:26.393 [2024-11-26 06:19:10.415327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.774 [2024-11-26T06:19:11.911Z] ====================================== 00:09:27.774 [2024-11-26T06:19:11.911Z] busy:2293950082 (cyc) 00:09:27.774 [2024-11-26T06:19:11.911Z] total_run_count: 4725000 00:09:27.774 [2024-11-26T06:19:11.911Z] tsc_hz: 2290000000 (cyc) 00:09:27.774 [2024-11-26T06:19:11.911Z] ====================================== 00:09:27.774 [2024-11-26T06:19:11.911Z] poller_cost: 485 (cyc), 211 (nsec) 00:09:27.774 00:09:27.774 real 0m1.605s 00:09:27.774 user 0m1.392s 00:09:27.774 sys 0m0.103s 00:09:27.774 06:19:11 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:27.774 06:19:11 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:27.774 ************************************ 00:09:27.774 END TEST thread_poller_perf 00:09:27.774 ************************************ 00:09:27.774 06:19:11 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:09:27.774 00:09:27.774 real 0m3.599s 00:09:27.774 user 0m2.961s 00:09:27.774 sys 0m0.442s 00:09:27.774 06:19:11 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:09:27.774 06:19:11 thread -- common/autotest_common.sh@10 -- # set +x 00:09:27.774 ************************************ 00:09:27.774 END TEST thread 00:09:27.774 ************************************ 00:09:27.774 06:19:11 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:09:27.774 06:19:11 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:09:27.774 06:19:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:27.774 06:19:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:27.774 06:19:11 -- common/autotest_common.sh@10 -- # set +x 00:09:27.774 ************************************ 00:09:27.774 START TEST app_cmdline 00:09:27.774 ************************************ 00:09:27.774 06:19:11 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:09:27.774 * Looking for test storage... 00:09:28.033 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:09:28.033 06:19:11 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:28.033 06:19:11 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:09:28.033 06:19:11 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:28.033 06:19:11 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:28.033 06:19:11 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:28.033 06:19:11 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:28.033 06:19:11 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:28.033 06:19:11 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:09:28.033 06:19:11 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:09:28.033 06:19:11 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:09:28.033 06:19:11 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:09:28.033 06:19:11 app_cmdline -- scripts/common.sh@338 -- # 
local 'op=<' 00:09:28.033 06:19:11 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:09:28.033 06:19:11 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:09:28.033 06:19:11 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:28.033 06:19:11 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:09:28.033 06:19:11 app_cmdline -- scripts/common.sh@345 -- # : 1 00:09:28.033 06:19:11 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:28.033 06:19:11 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:28.033 06:19:11 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:09:28.033 06:19:12 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:09:28.033 06:19:12 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:28.033 06:19:12 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:09:28.033 06:19:12 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:09:28.033 06:19:12 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:09:28.033 06:19:12 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:09:28.033 06:19:12 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:28.033 06:19:12 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:09:28.033 06:19:12 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:09:28.033 06:19:12 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:28.033 06:19:12 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:28.033 06:19:12 app_cmdline -- scripts/common.sh@368 -- # return 0 00:09:28.033 06:19:12 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:28.033 06:19:12 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:28.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.033 --rc genhtml_branch_coverage=1 00:09:28.033 --rc genhtml_function_coverage=1 00:09:28.033 --rc 
genhtml_legend=1 00:09:28.033 --rc geninfo_all_blocks=1 00:09:28.033 --rc geninfo_unexecuted_blocks=1 00:09:28.033 00:09:28.033 ' 00:09:28.033 06:19:12 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:28.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.033 --rc genhtml_branch_coverage=1 00:09:28.033 --rc genhtml_function_coverage=1 00:09:28.033 --rc genhtml_legend=1 00:09:28.033 --rc geninfo_all_blocks=1 00:09:28.033 --rc geninfo_unexecuted_blocks=1 00:09:28.033 00:09:28.033 ' 00:09:28.033 06:19:12 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:28.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.033 --rc genhtml_branch_coverage=1 00:09:28.033 --rc genhtml_function_coverage=1 00:09:28.034 --rc genhtml_legend=1 00:09:28.034 --rc geninfo_all_blocks=1 00:09:28.034 --rc geninfo_unexecuted_blocks=1 00:09:28.034 00:09:28.034 ' 00:09:28.034 06:19:12 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:28.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.034 --rc genhtml_branch_coverage=1 00:09:28.034 --rc genhtml_function_coverage=1 00:09:28.034 --rc genhtml_legend=1 00:09:28.034 --rc geninfo_all_blocks=1 00:09:28.034 --rc geninfo_unexecuted_blocks=1 00:09:28.034 00:09:28.034 ' 00:09:28.034 06:19:12 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:09:28.034 06:19:12 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=60153 00:09:28.034 06:19:12 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:09:28.034 06:19:12 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 60153 00:09:28.034 06:19:12 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 60153 ']' 00:09:28.034 06:19:12 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:28.034 06:19:12 app_cmdline -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:09:28.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:28.034 06:19:12 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:28.034 06:19:12 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:28.034 06:19:12 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:28.034 [2024-11-26 06:19:12.141367] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:09:28.034 [2024-11-26 06:19:12.141544] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60153 ] 00:09:28.293 [2024-11-26 06:19:12.329593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.553 [2024-11-26 06:19:12.457780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.491 06:19:13 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:29.491 06:19:13 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:09:29.491 06:19:13 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:09:29.750 { 00:09:29.750 "version": "SPDK v25.01-pre git sha1 8afd1c921", 00:09:29.750 "fields": { 00:09:29.750 "major": 25, 00:09:29.750 "minor": 1, 00:09:29.750 "patch": 0, 00:09:29.750 "suffix": "-pre", 00:09:29.750 "commit": "8afd1c921" 00:09:29.750 } 00:09:29.750 } 00:09:29.750 06:19:13 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:09:29.750 06:19:13 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:09:29.750 06:19:13 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:09:29.750 06:19:13 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:09:29.750 06:19:13 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:09:29.750 06:19:13 app_cmdline -- app/cmdline.sh@26 -- # sort 00:09:29.750 06:19:13 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.750 06:19:13 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:09:29.750 06:19:13 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:29.750 06:19:13 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.750 06:19:13 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:09:29.750 06:19:13 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:09:29.750 06:19:13 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:29.750 06:19:13 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:09:29.750 06:19:13 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:29.750 06:19:13 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:29.750 06:19:13 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:29.750 06:19:13 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:29.750 06:19:13 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:29.750 06:19:13 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:29.750 06:19:13 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:29.750 06:19:13 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:29.750 06:19:13 app_cmdline -- common/autotest_common.sh@646 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:29.750 06:19:13 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:30.009 request: 00:09:30.009 { 00:09:30.009 "method": "env_dpdk_get_mem_stats", 00:09:30.009 "req_id": 1 00:09:30.009 } 00:09:30.009 Got JSON-RPC error response 00:09:30.009 response: 00:09:30.009 { 00:09:30.009 "code": -32601, 00:09:30.009 "message": "Method not found" 00:09:30.009 } 00:09:30.009 06:19:14 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:09:30.009 06:19:14 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:30.009 06:19:14 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:30.009 06:19:14 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:30.009 06:19:14 app_cmdline -- app/cmdline.sh@1 -- # killprocess 60153 00:09:30.009 06:19:14 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 60153 ']' 00:09:30.009 06:19:14 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 60153 00:09:30.009 06:19:14 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:09:30.009 06:19:14 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:30.009 06:19:14 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60153 00:09:30.009 06:19:14 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:30.009 06:19:14 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:30.009 killing process with pid 60153 00:09:30.009 06:19:14 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60153' 00:09:30.009 06:19:14 app_cmdline -- common/autotest_common.sh@973 -- # kill 60153 00:09:30.009 06:19:14 app_cmdline -- common/autotest_common.sh@978 -- # wait 60153 00:09:33.311 00:09:33.311 real 0m5.159s 00:09:33.311 user 0m5.466s 00:09:33.311 sys 0m0.680s 00:09:33.311 06:19:16 app_cmdline -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:09:33.311 ************************************ 00:09:33.311 06:19:16 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:33.311 END TEST app_cmdline 00:09:33.311 ************************************ 00:09:33.311 06:19:16 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:09:33.311 06:19:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:33.311 06:19:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:33.311 06:19:16 -- common/autotest_common.sh@10 -- # set +x 00:09:33.311 ************************************ 00:09:33.311 START TEST version 00:09:33.311 ************************************ 00:09:33.311 06:19:16 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:09:33.311 * Looking for test storage... 00:09:33.311 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:09:33.311 06:19:17 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:33.311 06:19:17 version -- common/autotest_common.sh@1693 -- # lcov --version 00:09:33.311 06:19:17 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:33.311 06:19:17 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:33.311 06:19:17 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:33.311 06:19:17 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:33.311 06:19:17 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:33.311 06:19:17 version -- scripts/common.sh@336 -- # IFS=.-: 00:09:33.311 06:19:17 version -- scripts/common.sh@336 -- # read -ra ver1 00:09:33.311 06:19:17 version -- scripts/common.sh@337 -- # IFS=.-: 00:09:33.311 06:19:17 version -- scripts/common.sh@337 -- # read -ra ver2 00:09:33.311 06:19:17 version -- scripts/common.sh@338 -- # local 'op=<' 00:09:33.311 06:19:17 version -- scripts/common.sh@340 -- # ver1_l=2 00:09:33.311 06:19:17 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:09:33.311 06:19:17 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:33.311 06:19:17 version -- scripts/common.sh@344 -- # case "$op" in 00:09:33.311 06:19:17 version -- scripts/common.sh@345 -- # : 1 00:09:33.311 06:19:17 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:33.311 06:19:17 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:33.311 06:19:17 version -- scripts/common.sh@365 -- # decimal 1 00:09:33.311 06:19:17 version -- scripts/common.sh@353 -- # local d=1 00:09:33.311 06:19:17 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:33.311 06:19:17 version -- scripts/common.sh@355 -- # echo 1 00:09:33.311 06:19:17 version -- scripts/common.sh@365 -- # ver1[v]=1 00:09:33.311 06:19:17 version -- scripts/common.sh@366 -- # decimal 2 00:09:33.311 06:19:17 version -- scripts/common.sh@353 -- # local d=2 00:09:33.311 06:19:17 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:33.311 06:19:17 version -- scripts/common.sh@355 -- # echo 2 00:09:33.311 06:19:17 version -- scripts/common.sh@366 -- # ver2[v]=2 00:09:33.311 06:19:17 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:33.311 06:19:17 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:33.311 06:19:17 version -- scripts/common.sh@368 -- # return 0 00:09:33.311 06:19:17 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:33.311 06:19:17 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:33.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.311 --rc genhtml_branch_coverage=1 00:09:33.311 --rc genhtml_function_coverage=1 00:09:33.311 --rc genhtml_legend=1 00:09:33.311 --rc geninfo_all_blocks=1 00:09:33.311 --rc geninfo_unexecuted_blocks=1 00:09:33.311 00:09:33.311 ' 00:09:33.311 06:19:17 version -- common/autotest_common.sh@1706 -- # 
LCOV_OPTS=' 00:09:33.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.311 --rc genhtml_branch_coverage=1 00:09:33.311 --rc genhtml_function_coverage=1 00:09:33.311 --rc genhtml_legend=1 00:09:33.311 --rc geninfo_all_blocks=1 00:09:33.312 --rc geninfo_unexecuted_blocks=1 00:09:33.312 00:09:33.312 ' 00:09:33.312 06:19:17 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:33.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.312 --rc genhtml_branch_coverage=1 00:09:33.312 --rc genhtml_function_coverage=1 00:09:33.312 --rc genhtml_legend=1 00:09:33.312 --rc geninfo_all_blocks=1 00:09:33.312 --rc geninfo_unexecuted_blocks=1 00:09:33.312 00:09:33.312 ' 00:09:33.312 06:19:17 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:33.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.312 --rc genhtml_branch_coverage=1 00:09:33.312 --rc genhtml_function_coverage=1 00:09:33.312 --rc genhtml_legend=1 00:09:33.312 --rc geninfo_all_blocks=1 00:09:33.312 --rc geninfo_unexecuted_blocks=1 00:09:33.312 00:09:33.312 ' 00:09:33.312 06:19:17 version -- app/version.sh@17 -- # get_header_version major 00:09:33.312 06:19:17 version -- app/version.sh@14 -- # cut -f2 00:09:33.312 06:19:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:33.312 06:19:17 version -- app/version.sh@14 -- # tr -d '"' 00:09:33.312 06:19:17 version -- app/version.sh@17 -- # major=25 00:09:33.312 06:19:17 version -- app/version.sh@18 -- # get_header_version minor 00:09:33.312 06:19:17 version -- app/version.sh@14 -- # cut -f2 00:09:33.312 06:19:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:33.312 06:19:17 version -- app/version.sh@14 -- # tr -d '"' 00:09:33.312 06:19:17 version -- app/version.sh@18 -- # minor=1 00:09:33.312 06:19:17 
version -- app/version.sh@19 -- # get_header_version patch 00:09:33.312 06:19:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:33.312 06:19:17 version -- app/version.sh@14 -- # cut -f2 00:09:33.312 06:19:17 version -- app/version.sh@14 -- # tr -d '"' 00:09:33.312 06:19:17 version -- app/version.sh@19 -- # patch=0 00:09:33.312 06:19:17 version -- app/version.sh@20 -- # get_header_version suffix 00:09:33.312 06:19:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:33.312 06:19:17 version -- app/version.sh@14 -- # tr -d '"' 00:09:33.312 06:19:17 version -- app/version.sh@14 -- # cut -f2 00:09:33.312 06:19:17 version -- app/version.sh@20 -- # suffix=-pre 00:09:33.312 06:19:17 version -- app/version.sh@22 -- # version=25.1 00:09:33.312 06:19:17 version -- app/version.sh@25 -- # (( patch != 0 )) 00:09:33.312 06:19:17 version -- app/version.sh@28 -- # version=25.1rc0 00:09:33.312 06:19:17 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:09:33.312 06:19:17 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:09:33.312 06:19:17 version -- app/version.sh@30 -- # py_version=25.1rc0 00:09:33.312 06:19:17 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:09:33.312 00:09:33.312 real 0m0.315s 00:09:33.312 user 0m0.178s 00:09:33.312 sys 0m0.192s 00:09:33.312 06:19:17 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:33.312 06:19:17 version -- common/autotest_common.sh@10 -- # set +x 00:09:33.312 ************************************ 00:09:33.312 END TEST version 00:09:33.312 ************************************ 00:09:33.312 
06:19:17 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:09:33.312 06:19:17 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:09:33.312 06:19:17 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:09:33.312 06:19:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:33.312 06:19:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:33.312 06:19:17 -- common/autotest_common.sh@10 -- # set +x 00:09:33.312 ************************************ 00:09:33.312 START TEST bdev_raid 00:09:33.312 ************************************ 00:09:33.312 06:19:17 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:09:33.571 * Looking for test storage... 00:09:33.571 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:09:33.571 06:19:17 bdev_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:33.571 06:19:17 bdev_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:09:33.571 06:19:17 bdev_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:33.571 06:19:17 bdev_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:33.571 06:19:17 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:33.571 06:19:17 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:33.571 06:19:17 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:33.571 06:19:17 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:09:33.571 06:19:17 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:09:33.571 06:19:17 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:09:33.571 06:19:17 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:09:33.571 06:19:17 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:09:33.571 06:19:17 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:09:33.571 06:19:17 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:09:33.571 06:19:17 bdev_raid -- scripts/common.sh@343 -- # local 
lt=0 gt=0 eq=0 v 00:09:33.571 06:19:17 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:09:33.571 06:19:17 bdev_raid -- scripts/common.sh@345 -- # : 1 00:09:33.571 06:19:17 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:33.571 06:19:17 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:33.571 06:19:17 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:09:33.571 06:19:17 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:09:33.571 06:19:17 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:33.571 06:19:17 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:09:33.571 06:19:17 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:09:33.571 06:19:17 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:09:33.571 06:19:17 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:09:33.571 06:19:17 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:33.571 06:19:17 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:09:33.571 06:19:17 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:09:33.571 06:19:17 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:33.571 06:19:17 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:33.571 06:19:17 bdev_raid -- scripts/common.sh@368 -- # return 0 00:09:33.571 06:19:17 bdev_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:33.571 06:19:17 bdev_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:33.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.571 --rc genhtml_branch_coverage=1 00:09:33.571 --rc genhtml_function_coverage=1 00:09:33.571 --rc genhtml_legend=1 00:09:33.571 --rc geninfo_all_blocks=1 00:09:33.571 --rc geninfo_unexecuted_blocks=1 00:09:33.571 00:09:33.571 ' 00:09:33.571 06:19:17 bdev_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:33.571 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:09:33.571 --rc genhtml_branch_coverage=1 00:09:33.571 --rc genhtml_function_coverage=1 00:09:33.571 --rc genhtml_legend=1 00:09:33.571 --rc geninfo_all_blocks=1 00:09:33.571 --rc geninfo_unexecuted_blocks=1 00:09:33.571 00:09:33.571 ' 00:09:33.571 06:19:17 bdev_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:33.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.571 --rc genhtml_branch_coverage=1 00:09:33.571 --rc genhtml_function_coverage=1 00:09:33.571 --rc genhtml_legend=1 00:09:33.571 --rc geninfo_all_blocks=1 00:09:33.571 --rc geninfo_unexecuted_blocks=1 00:09:33.571 00:09:33.571 ' 00:09:33.571 06:19:17 bdev_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:33.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.571 --rc genhtml_branch_coverage=1 00:09:33.571 --rc genhtml_function_coverage=1 00:09:33.571 --rc genhtml_legend=1 00:09:33.571 --rc geninfo_all_blocks=1 00:09:33.571 --rc geninfo_unexecuted_blocks=1 00:09:33.571 00:09:33.571 ' 00:09:33.571 06:19:17 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:33.571 06:19:17 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:09:33.571 06:19:17 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:09:33.571 06:19:17 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:09:33.571 06:19:17 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:09:33.571 06:19:17 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:09:33.571 06:19:17 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:09:33.571 06:19:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:33.571 06:19:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:33.571 06:19:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:33.571 ************************************ 
00:09:33.571 START TEST raid1_resize_data_offset_test 00:09:33.571 ************************************ 00:09:33.571 06:19:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:09:33.571 06:19:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=60352 00:09:33.571 Process raid pid: 60352 00:09:33.571 06:19:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 60352' 00:09:33.571 06:19:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:33.571 06:19:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 60352 00:09:33.571 06:19:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 60352 ']' 00:09:33.571 06:19:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.571 06:19:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:33.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:33.571 06:19:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:33.571 06:19:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:33.571 06:19:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.830 [2024-11-26 06:19:17.757295] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:09:33.830 [2024-11-26 06:19:17.757473] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:33.830 [2024-11-26 06:19:17.944379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.089 [2024-11-26 06:19:18.081396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.348 [2024-11-26 06:19:18.319933] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:34.348 [2024-11-26 06:19:18.319980] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:34.607 06:19:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:34.607 06:19:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:09:34.607 06:19:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:09:34.607 06:19:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.607 06:19:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.866 malloc0 00:09:34.866 06:19:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.866 06:19:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:09:34.866 06:19:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.866 06:19:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.866 malloc1 00:09:34.866 06:19:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.866 06:19:18 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:09:34.866 06:19:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.866 06:19:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.866 null0 00:09:34.866 06:19:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.866 06:19:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:09:34.866 06:19:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.866 06:19:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.866 [2024-11-26 06:19:18.869159] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:09:34.866 [2024-11-26 06:19:18.871439] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:34.866 [2024-11-26 06:19:18.871502] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:09:34.866 [2024-11-26 06:19:18.871736] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:34.866 [2024-11-26 06:19:18.871755] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:09:34.866 [2024-11-26 06:19:18.872158] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:09:34.866 [2024-11-26 06:19:18.872413] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:34.866 [2024-11-26 06:19:18.872444] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:09:34.866 [2024-11-26 06:19:18.872678] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:09:34.866 06:19:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.866 06:19:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.866 06:19:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.866 06:19:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.866 06:19:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:09:34.866 06:19:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.866 06:19:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:09:34.866 06:19:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:09:34.866 06:19:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.866 06:19:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.866 [2024-11-26 06:19:18.933044] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:09:34.866 06:19:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.866 06:19:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:09:34.866 06:19:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.866 06:19:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.801 malloc2 00:09:35.801 06:19:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.801 06:19:19 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:09:35.801 06:19:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.801 06:19:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.801 [2024-11-26 06:19:19.571417] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:35.801 [2024-11-26 06:19:19.592257] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:35.801 06:19:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.801 [2024-11-26 06:19:19.594441] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:09:35.801 06:19:19 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:09:35.801 06:19:19 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.801 06:19:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.801 06:19:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.801 06:19:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.801 06:19:19 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:09:35.801 06:19:19 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 60352 00:09:35.801 06:19:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 60352 ']' 00:09:35.801 06:19:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 60352 00:09:35.801 06:19:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname 00:09:35.802 06:19:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:09:35.802 06:19:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60352 00:09:35.802 06:19:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:35.802 06:19:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:35.802 killing process with pid 60352 00:09:35.802 06:19:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60352' 00:09:35.802 06:19:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 60352 00:09:35.802 [2024-11-26 06:19:19.683352] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:35.802 06:19:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 60352 00:09:35.802 [2024-11-26 06:19:19.683720] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:09:35.802 [2024-11-26 06:19:19.683790] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:35.802 [2024-11-26 06:19:19.683807] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:09:35.802 [2024-11-26 06:19:19.726149] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:35.802 [2024-11-26 06:19:19.726546] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:35.802 [2024-11-26 06:19:19.726574] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:09:37.775 [2024-11-26 06:19:21.880240] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:39.152 06:19:23 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:09:39.152 00:09:39.152 real 0m5.533s 00:09:39.152 user 0m5.477s 00:09:39.152 sys 0m0.559s 00:09:39.152 06:19:23 
bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:39.152 06:19:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.152 ************************************ 00:09:39.152 END TEST raid1_resize_data_offset_test 00:09:39.152 ************************************ 00:09:39.152 06:19:23 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:09:39.152 06:19:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:39.152 06:19:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:39.152 06:19:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:39.152 ************************************ 00:09:39.152 START TEST raid0_resize_superblock_test 00:09:39.152 ************************************ 00:09:39.152 06:19:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0 00:09:39.152 06:19:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:09:39.152 06:19:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60446 00:09:39.152 06:19:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:39.152 Process raid pid: 60446 00:09:39.152 06:19:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60446' 00:09:39.152 06:19:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60446 00:09:39.152 06:19:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60446 ']' 00:09:39.152 06:19:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:39.152 06:19:23 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:09:39.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:39.152 06:19:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:39.152 06:19:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:39.152 06:19:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.411 [2024-11-26 06:19:23.353895] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:09:39.411 [2024-11-26 06:19:23.354078] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:39.670 [2024-11-26 06:19:23.544782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.670 [2024-11-26 06:19:23.686561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.928 [2024-11-26 06:19:23.924632] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:39.928 [2024-11-26 06:19:23.924685] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:40.493 06:19:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:40.493 06:19:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:40.493 06:19:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:09:40.493 06:19:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.493 06:19:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:09:41.062 malloc0 00:09:41.062 06:19:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.062 06:19:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:09:41.062 06:19:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.062 06:19:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.062 [2024-11-26 06:19:24.951081] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:09:41.062 [2024-11-26 06:19:24.951168] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:41.062 [2024-11-26 06:19:24.951202] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:41.062 [2024-11-26 06:19:24.951217] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:41.062 [2024-11-26 06:19:24.953849] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:41.062 [2024-11-26 06:19:24.953909] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:09:41.062 pt0 00:09:41.062 06:19:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.062 06:19:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:09:41.062 06:19:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.062 06:19:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.062 035fde7c-a69c-46bd-951a-a4614795bbfa 00:09:41.062 06:19:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.062 06:19:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 
00:09:41.062 06:19:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.062 06:19:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.062 7cb7065c-e588-4cab-a0d2-8394c2a824fd 00:09:41.062 06:19:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.062 06:19:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:09:41.062 06:19:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.062 06:19:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.062 2caedc6a-5bc7-4e60-b6f1-8d66720f7d1e 00:09:41.062 06:19:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.062 06:19:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:09:41.062 06:19:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:09:41.062 06:19:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.062 06:19:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.062 [2024-11-26 06:19:25.093033] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 7cb7065c-e588-4cab-a0d2-8394c2a824fd is claimed 00:09:41.062 [2024-11-26 06:19:25.093207] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 2caedc6a-5bc7-4e60-b6f1-8d66720f7d1e is claimed 00:09:41.062 [2024-11-26 06:19:25.093391] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:41.062 [2024-11-26 06:19:25.093411] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:09:41.062 [2024-11-26 06:19:25.093755] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:41.062 [2024-11-26 06:19:25.094000] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:41.062 [2024-11-26 06:19:25.094014] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:09:41.062 [2024-11-26 06:19:25.094419] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:41.062 06:19:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.062 06:19:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:09:41.062 06:19:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.062 06:19:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.062 06:19:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:09:41.062 06:19:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.062 06:19:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:09:41.062 06:19:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:09:41.062 06:19:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:09:41.062 06:19:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.062 06:19:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.062 06:19:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.321 06:19:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:09:41.321 06:19:25 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:09:41.321 06:19:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:09:41.321 06:19:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:09:41.321 06:19:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:41.321 06:19:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.321 06:19:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.321 [2024-11-26 06:19:25.205094] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:41.321 06:19:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.321 06:19:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:09:41.321 06:19:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:09:41.321 06:19:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:09:41.321 06:19:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:09:41.321 06:19:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.321 06:19:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.321 [2024-11-26 06:19:25.249014] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:41.321 [2024-11-26 06:19:25.249081] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '7cb7065c-e588-4cab-a0d2-8394c2a824fd' was resized: old size 131072, new size 204800 00:09:41.321 06:19:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:41.321 06:19:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:09:41.321 06:19:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.321 06:19:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.321 [2024-11-26 06:19:25.260932] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:41.321 [2024-11-26 06:19:25.260975] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '2caedc6a-5bc7-4e60-b6f1-8d66720f7d1e' was resized: old size 131072, new size 204800 00:09:41.321 [2024-11-26 06:19:25.261017] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:09:41.321 06:19:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.321 06:19:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:09:41.321 06:19:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.321 06:19:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.321 06:19:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:09:41.321 06:19:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.322 06:19:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:09:41.322 06:19:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:09:41.322 06:19:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.322 06:19:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.322 06:19:25 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:09:41.322 06:19:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.322 06:19:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:09:41.322 06:19:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:09:41.322 06:19:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:41.322 06:19:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.322 06:19:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.322 06:19:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:09:41.322 06:19:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:09:41.322 [2024-11-26 06:19:25.376812] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:41.322 06:19:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.322 06:19:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:09:41.322 06:19:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:09:41.322 06:19:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:09:41.322 06:19:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:09:41.322 06:19:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.322 06:19:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.322 [2024-11-26 06:19:25.428463] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 
being removed: closing lvstore lvs0 00:09:41.322 [2024-11-26 06:19:25.428565] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:09:41.322 [2024-11-26 06:19:25.428580] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:41.322 [2024-11-26 06:19:25.428600] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:09:41.322 [2024-11-26 06:19:25.428762] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:41.322 [2024-11-26 06:19:25.428807] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:41.322 [2024-11-26 06:19:25.428821] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:09:41.322 06:19:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.322 06:19:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:09:41.322 06:19:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.322 06:19:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.322 [2024-11-26 06:19:25.440335] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:09:41.322 [2024-11-26 06:19:25.440425] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:41.322 [2024-11-26 06:19:25.440453] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:09:41.322 [2024-11-26 06:19:25.440467] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:41.322 [2024-11-26 06:19:25.443099] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:41.322 [2024-11-26 06:19:25.443150] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:09:41.322 pt0 00:09:41.322 [2024-11-26 06:19:25.445337] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 7cb7065c-e588-4cab-a0d2-8394c2a824fd 00:09:41.322 [2024-11-26 06:19:25.445416] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 7cb7065c-e588-4cab-a0d2-8394c2a824fd is claimed 00:09:41.322 [2024-11-26 06:19:25.445547] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 2caedc6a-5bc7-4e60-b6f1-8d66720f7d1e 00:09:41.322 [2024-11-26 06:19:25.445572] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 2caedc6a-5bc7-4e60-b6f1-8d66720f7d1e is claimed 00:09:41.322 06:19:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.322 [2024-11-26 06:19:25.445743] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 2caedc6a-5bc7-4e60-b6f1-8d66720f7d1e (2) smaller than existing raid bdev Raid (3) 00:09:41.322 [2024-11-26 06:19:25.445770] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 7cb7065c-e588-4cab-a0d2-8394c2a824fd: File exists 00:09:41.322 [2024-11-26 06:19:25.445817] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:09:41.322 [2024-11-26 06:19:25.445831] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:09:41.322 06:19:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:09:41.322 [2024-11-26 06:19:25.446136] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:09:41.322 [2024-11-26 06:19:25.446333] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:09:41.322 [2024-11-26 06:19:25.446343] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:09:41.322 06:19:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:09:41.322 [2024-11-26 06:19:25.446516] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:41.322 06:19:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.581 06:19:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.581 06:19:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:09:41.581 06:19:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:41.581 06:19:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:09:41.581 06:19:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:09:41.581 06:19:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.581 06:19:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.581 [2024-11-26 06:19:25.468617] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:41.581 06:19:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.581 06:19:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:09:41.581 06:19:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:09:41.581 06:19:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:09:41.581 06:19:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60446 00:09:41.581 06:19:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60446 ']' 00:09:41.581 06:19:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60446 00:09:41.581 06:19:25 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:09:41.581 06:19:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:41.581 06:19:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60446 00:09:41.581 killing process with pid 60446 00:09:41.581 06:19:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:41.581 06:19:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:41.581 06:19:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60446' 00:09:41.581 06:19:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60446 00:09:41.581 [2024-11-26 06:19:25.545495] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:41.581 [2024-11-26 06:19:25.545612] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:41.581 [2024-11-26 06:19:25.545670] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:41.581 [2024-11-26 06:19:25.545681] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:09:41.581 06:19:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60446 00:09:43.484 [2024-11-26 06:19:27.226610] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:44.420 06:19:28 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:09:44.420 00:09:44.420 real 0m5.257s 00:09:44.420 user 0m5.573s 00:09:44.420 sys 0m0.588s 00:09:44.420 06:19:28 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:44.420 ************************************ 00:09:44.420 END TEST raid0_resize_superblock_test 00:09:44.420 
************************************ 00:09:44.420 06:19:28 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.680 06:19:28 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:09:44.680 06:19:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:44.680 06:19:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:44.680 06:19:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:44.680 ************************************ 00:09:44.680 START TEST raid1_resize_superblock_test 00:09:44.680 ************************************ 00:09:44.680 06:19:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1 00:09:44.680 06:19:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:09:44.680 06:19:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60550 00:09:44.680 Process raid pid: 60550 00:09:44.680 06:19:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:44.680 06:19:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60550' 00:09:44.680 06:19:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60550 00:09:44.680 06:19:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60550 ']' 00:09:44.680 06:19:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:44.680 06:19:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:44.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:44.680 06:19:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:44.680 06:19:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:44.680 06:19:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.680 [2024-11-26 06:19:28.682904] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:09:44.680 [2024-11-26 06:19:28.683137] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:44.957 [2024-11-26 06:19:28.876841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.957 [2024-11-26 06:19:29.012431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.214 [2024-11-26 06:19:29.251295] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:45.214 [2024-11-26 06:19:29.251348] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:45.473 06:19:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:45.473 06:19:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:45.473 06:19:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:09:45.473 06:19:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.473 06:19:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.410 malloc0 00:09:46.410 06:19:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.410 06:19:30 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:09:46.410 06:19:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.410 06:19:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.410 [2024-11-26 06:19:30.199161] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:09:46.410 [2024-11-26 06:19:30.199257] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:46.410 [2024-11-26 06:19:30.199289] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:46.410 [2024-11-26 06:19:30.199306] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:46.410 [2024-11-26 06:19:30.201950] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:46.410 [2024-11-26 06:19:30.202003] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:09:46.410 pt0 00:09:46.410 06:19:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.410 06:19:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:09:46.410 06:19:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.410 06:19:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.410 73b0d5eb-ef62-4151-8354-db214821c5ad 00:09:46.410 06:19:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.410 06:19:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:09:46.410 06:19:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.410 06:19:30 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.410 44f264db-c768-43b8-b928-d2d901c3df5f 00:09:46.410 06:19:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.410 06:19:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:09:46.410 06:19:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.410 06:19:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.410 20b16ac5-40a5-44d2-b6f8-bfd404adc0d7 00:09:46.410 06:19:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.410 06:19:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:09:46.410 06:19:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:09:46.410 06:19:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.410 06:19:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.410 [2024-11-26 06:19:30.336774] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 44f264db-c768-43b8-b928-d2d901c3df5f is claimed 00:09:46.410 [2024-11-26 06:19:30.336932] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 20b16ac5-40a5-44d2-b6f8-bfd404adc0d7 is claimed 00:09:46.410 [2024-11-26 06:19:30.337129] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:46.410 [2024-11-26 06:19:30.337150] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:09:46.410 [2024-11-26 06:19:30.337526] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:46.410 [2024-11-26 06:19:30.337788] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:46.410 [2024-11-26 06:19:30.337803] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:09:46.410 [2024-11-26 06:19:30.338039] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:46.410 06:19:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.410 06:19:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:09:46.410 06:19:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:09:46.410 06:19:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.410 06:19:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.410 06:19:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.410 06:19:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:09:46.410 06:19:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:09:46.410 06:19:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:09:46.410 06:19:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.410 06:19:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.410 06:19:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.410 06:19:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:09:46.410 06:19:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:09:46.410 06:19:30 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:46.410 06:19:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:09:46.410 06:19:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:09:46.410 06:19:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.410 06:19:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.410 [2024-11-26 06:19:30.448824] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:46.410 06:19:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.410 06:19:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:09:46.410 06:19:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:09:46.410 06:19:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:09:46.410 06:19:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:09:46.410 06:19:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.410 06:19:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.410 [2024-11-26 06:19:30.496717] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:46.410 [2024-11-26 06:19:30.496767] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '44f264db-c768-43b8-b928-d2d901c3df5f' was resized: old size 131072, new size 204800 00:09:46.410 06:19:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.410 06:19:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:09:46.410 06:19:30 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.410 06:19:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.410 [2024-11-26 06:19:30.508630] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:46.410 [2024-11-26 06:19:30.508674] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '20b16ac5-40a5-44d2-b6f8-bfd404adc0d7' was resized: old size 131072, new size 204800 00:09:46.410 [2024-11-26 06:19:30.508711] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:09:46.410 06:19:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.410 06:19:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:09:46.410 06:19:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.410 06:19:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:09:46.410 06:19:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.669 06:19:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.669 06:19:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:09:46.669 06:19:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:09:46.669 06:19:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:09:46.669 06:19:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.669 06:19:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.669 06:19:30 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.669 06:19:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:09:46.669 06:19:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:09:46.669 06:19:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:46.669 06:19:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.669 06:19:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.669 06:19:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:09:46.669 06:19:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:09:46.669 [2024-11-26 06:19:30.620526] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:46.669 06:19:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.669 06:19:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:09:46.669 06:19:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:09:46.669 06:19:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:09:46.670 06:19:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:09:46.670 06:19:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.670 06:19:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.670 [2024-11-26 06:19:30.668220] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:09:46.670 [2024-11-26 06:19:30.668390] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 
00:09:46.670 [2024-11-26 06:19:30.668471] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:09:46.670 [2024-11-26 06:19:30.668698] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:46.670 [2024-11-26 06:19:30.668998] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:46.670 [2024-11-26 06:19:30.669150] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:46.670 [2024-11-26 06:19:30.669217] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:09:46.670 06:19:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.670 06:19:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:09:46.670 06:19:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.670 06:19:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.670 [2024-11-26 06:19:30.680087] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:09:46.670 [2024-11-26 06:19:30.680230] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:46.670 [2024-11-26 06:19:30.680303] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:09:46.670 [2024-11-26 06:19:30.680355] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:46.670 [2024-11-26 06:19:30.682980] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:46.670 [2024-11-26 06:19:30.683096] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:09:46.670 pt0 00:09:46.670 [2024-11-26 06:19:30.685299] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 
44f264db-c768-43b8-b928-d2d901c3df5f 00:09:46.670 [2024-11-26 06:19:30.685487] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 44f264db-c768-43b8-b928-d2d901c3df5f is claimed 06:19:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.670 [2024-11-26 06:19:30.685752] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 20b16ac5-40a5-44d2-b6f8-bfd404adc0d7 00:09:46.670 [2024-11-26 06:19:30.685832] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 20b16ac5-40a5-44d2-b6f8-bfd404adc0d7 is claimed 06:19:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:09:46.670 [2024-11-26 06:19:30.686098] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 20b16ac5-40a5-44d2-b6f8-bfd404adc0d7 (2) smaller than existing raid bdev Raid (3) 00:09:46.670 [2024-11-26 06:19:30.686173] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 44f264db-c768-43b8-b928-d2d901c3df5f: File exists 06:19:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.670 [2024-11-26 06:19:30.686299] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:09:46.670 [2024-11-26 06:19:30.686342] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:46.670 06:19:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.670 [2024-11-26 06:19:30.686670] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 [2024-11-26 06:19:30.686925] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 [2024-11-26 06:19:30.686974] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00
[2024-11-26 06:19:30.687246] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:46.670 06:19:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.670 06:19:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:09:46.670 06:19:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:09:46.670 06:19:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:09:46.670 06:19:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:46.670 06:19:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.670 06:19:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.670 [2024-11-26 06:19:30.708424] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:46.670 06:19:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.670 06:19:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:09:46.670 06:19:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:09:46.670 06:19:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:09:46.670 06:19:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60550 00:09:46.670 06:19:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60550 ']' 00:09:46.670 06:19:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60550 00:09:46.670 06:19:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:46.670 06:19:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:09:46.670 06:19:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60550 00:09:46.670 killing process with pid 60550 00:09:46.670 06:19:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:46.670 06:19:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:46.670 06:19:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60550' 00:09:46.670 06:19:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60550 00:09:46.670 [2024-11-26 06:19:30.792983] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:46.670 [2024-11-26 06:19:30.793103] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:46.670 06:19:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60550 00:09:46.670 [2024-11-26 06:19:30.793171] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:46.670 [2024-11-26 06:19:30.793181] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:09:48.576 [2024-11-26 06:19:32.445401] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:49.953 ************************************ 00:09:49.953 END TEST raid1_resize_superblock_test 00:09:49.953 ************************************ 00:09:49.953 06:19:33 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:09:49.953 00:09:49.953 real 0m5.162s 00:09:49.953 user 0m5.403s 00:09:49.953 sys 0m0.636s 00:09:49.953 06:19:33 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:49.953 06:19:33 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.953 
06:19:33 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:09:49.953 06:19:33 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:09:49.953 06:19:33 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:09:49.953 06:19:33 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:09:49.953 06:19:33 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:09:49.953 06:19:33 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:09:49.953 06:19:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:49.953 06:19:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:49.953 06:19:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:49.953 ************************************ 00:09:49.953 START TEST raid_function_test_raid0 00:09:49.953 ************************************ 00:09:49.953 06:19:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:09:49.953 06:19:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:09:49.953 06:19:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:09:49.953 06:19:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:09:49.953 06:19:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60660 00:09:49.953 06:19:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:49.953 06:19:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60660' 00:09:49.953 Process raid pid: 60660 00:09:49.953 06:19:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60660 00:09:49.953 06:19:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 60660 ']' 00:09:49.953 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.953 06:19:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.953 06:19:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:49.953 06:19:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:49.953 06:19:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:49.953 06:19:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:09:49.953 [2024-11-26 06:19:33.922082] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:09:49.953 [2024-11-26 06:19:33.922229] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:50.212 [2024-11-26 06:19:34.104980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.212 [2024-11-26 06:19:34.257488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.470 [2024-11-26 06:19:34.530496] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:50.470 [2024-11-26 06:19:34.530567] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:50.729 06:19:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:50.729 06:19:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:09:50.729 06:19:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:09:50.729 06:19:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:09:50.729 06:19:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:09:50.988 Base_1 00:09:50.988 06:19:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.988 06:19:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:09:50.988 06:19:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.988 06:19:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:09:50.988 Base_2 00:09:50.988 06:19:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.988 06:19:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:09:50.988 06:19:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.988 06:19:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:09:50.988 [2024-11-26 06:19:34.939332] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:09:50.988 [2024-11-26 06:19:34.941892] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:09:50.988 [2024-11-26 06:19:34.942093] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:50.988 [2024-11-26 06:19:34.942145] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:50.988 [2024-11-26 06:19:34.942591] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:50.988 [2024-11-26 06:19:34.942843] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:50.988 [2024-11-26 06:19:34.942890] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 
00:09:50.988 [2024-11-26 06:19:34.943257] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:50.988 06:19:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.988 06:19:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:09:50.988 06:19:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:09:50.988 06:19:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.988 06:19:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:09:50.988 06:19:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.988 06:19:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:09:50.988 06:19:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:09:50.988 06:19:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:09:50.988 06:19:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:09:50.988 06:19:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:09:50.988 06:19:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:50.988 06:19:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:09:50.988 06:19:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:50.988 06:19:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:09:50.988 06:19:34 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:50.988 06:19:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:09:50.988 06:19:35 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:09:51.247 [2024-11-26 06:19:35.219013] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:51.247 /dev/nbd0 00:09:51.247 06:19:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:51.247 06:19:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:51.247 06:19:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:51.247 06:19:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:09:51.247 06:19:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:51.247 06:19:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:51.247 06:19:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:51.247 06:19:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:09:51.247 06:19:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:51.247 06:19:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:51.247 06:19:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:51.247 1+0 records in 00:09:51.247 1+0 records out 00:09:51.247 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000577266 s, 7.1 MB/s 00:09:51.247 06:19:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:51.247 06:19:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096 00:09:51.247 06:19:35 
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:51.247 06:19:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:51.247 06:19:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:09:51.247 06:19:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:51.247 06:19:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:09:51.247 06:19:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:09:51.247 06:19:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:09:51.247 06:19:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:09:51.506 06:19:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:51.506 { 00:09:51.506 "nbd_device": "/dev/nbd0", 00:09:51.506 "bdev_name": "raid" 00:09:51.506 } 00:09:51.506 ]' 00:09:51.506 06:19:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:51.506 06:19:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:51.506 { 00:09:51.506 "nbd_device": "/dev/nbd0", 00:09:51.506 "bdev_name": "raid" 00:09:51.506 } 00:09:51.506 ]' 00:09:51.506 06:19:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:09:51.506 06:19:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:09:51.506 06:19:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:51.506 06:19:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:09:51.506 06:19:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- 
# echo 1 00:09:51.506 06:19:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:09:51.506 06:19:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:09:51.506 06:19:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:09:51.506 06:19:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:09:51.506 06:19:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:09:51.506 06:19:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:09:51.506 06:19:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:09:51.506 06:19:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:09:51.506 06:19:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:09:51.506 06:19:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:09:51.506 06:19:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:09:51.506 06:19:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:09:51.506 06:19:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:09:51.506 06:19:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:09:51.506 06:19:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:09:51.506 06:19:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:09:51.506 06:19:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:09:51.506 06:19:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:09:51.506 06:19:35 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:09:51.785 4096+0 records in 00:09:51.785 4096+0 records out 00:09:51.785 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.035708 s, 58.7 MB/s 00:09:51.785 06:19:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:09:52.078 4096+0 records in 00:09:52.078 4096+0 records out 00:09:52.078 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.243937 s, 8.6 MB/s 00:09:52.078 06:19:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:09:52.078 06:19:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:52.078 06:19:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:09:52.078 06:19:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:09:52.078 06:19:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:09:52.078 06:19:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:09:52.078 06:19:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:09:52.078 128+0 records in 00:09:52.078 128+0 records out 00:09:52.078 65536 bytes (66 kB, 64 KiB) copied, 0.000785915 s, 83.4 MB/s 00:09:52.078 06:19:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:09:52.078 06:19:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:09:52.078 06:19:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:52.078 06:19:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:09:52.078 06:19:35 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:09:52.078 06:19:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:09:52.078 06:19:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:09:52.078 06:19:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:09:52.078 2035+0 records in 00:09:52.078 2035+0 records out 00:09:52.078 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0159311 s, 65.4 MB/s 00:09:52.078 06:19:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:09:52.078 06:19:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:09:52.078 06:19:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:52.078 06:19:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:09:52.078 06:19:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:09:52.078 06:19:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:09:52.078 06:19:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:09:52.078 06:19:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:09:52.078 456+0 records in 00:09:52.078 456+0 records out 00:09:52.078 233472 bytes (233 kB, 228 KiB) copied, 0.00427247 s, 54.6 MB/s 00:09:52.078 06:19:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:09:52.078 06:19:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:09:52.078 06:19:36 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:52.078 06:19:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:09:52.078 06:19:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:09:52.078 06:19:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:09:52.078 06:19:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:09:52.078 06:19:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:09:52.078 06:19:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:52.078 06:19:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:52.078 06:19:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:09:52.078 06:19:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:52.078 06:19:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:09:52.339 06:19:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:52.339 [2024-11-26 06:19:36.313066] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:52.339 06:19:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:52.339 06:19:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:52.339 06:19:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:52.339 06:19:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:52.339 06:19:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:52.339 06:19:36 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:09:52.339 06:19:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:09:52.339 06:19:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:09:52.339 06:19:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:09:52.339 06:19:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:09:52.599 06:19:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:52.599 06:19:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:52.599 06:19:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:52.599 06:19:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:52.599 06:19:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:52.599 06:19:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:52.599 06:19:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:09:52.599 06:19:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:09:52.599 06:19:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:52.599 06:19:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:09:52.599 06:19:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:09:52.599 06:19:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60660 00:09:52.599 06:19:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 60660 ']' 00:09:52.599 06:19:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 60660 
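The `nbd_get_count` step traced above pipes the `nbd_get_disks` RPC reply through `jq -r '.[] | .nbd_device'` and `grep -c /dev/nbd` to count attached nbd devices; after `nbd_stop_disk` that count must drop to 0 before the raid bdev is torn down. The same extraction can be sketched in Python (the helper name is an assumption; the sample JSON mirrors the `'[]'` reply and the one-disk reply seen elsewhere in this trace):

```python
import json

def nbd_count(disks_json):
    # Equivalent of: jq -r '.[] | .nbd_device' | grep -c /dev/nbd
    return sum("/dev/nbd" in d.get("nbd_device", "")
               for d in json.loads(disks_json))

assert nbd_count('[]') == 0   # after nbd_stop_disk, no disks remain
assert nbd_count('[{"nbd_device": "/dev/nbd0", "bdev_name": "raid"}]') == 1
```

The shell pipeline's `|| true` fallback (the `-- # true` line above) exists because `grep -c` exits nonzero on a zero count; the Python version has no such edge case.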
00:09:52.599 06:19:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:09:52.599 06:19:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:52.599 06:19:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60660 00:09:52.599 killing process with pid 60660 00:09:52.599 06:19:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:52.599 06:19:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:52.599 06:19:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60660' 00:09:52.599 06:19:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 60660 00:09:52.599 [2024-11-26 06:19:36.677980] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:52.599 [2024-11-26 06:19:36.678131] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:52.599 06:19:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 60660 00:09:52.599 [2024-11-26 06:19:36.678192] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:52.599 [2024-11-26 06:19:36.678210] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:09:52.858 [2024-11-26 06:19:36.917722] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:54.239 06:19:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:09:54.239 00:09:54.239 real 0m4.365s 00:09:54.239 user 0m4.986s 00:09:54.239 sys 0m1.164s 00:09:54.239 06:19:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:54.239 ************************************ 00:09:54.239 END TEST raid_function_test_raid0 
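The `raid_unmap_data_verify` loop traced above zero-fills a region of the reference file with `dd if=/dev/zero ... conv=notrunc` while issuing `blkdiscard` over the same byte range on the raid device, then `cmp -b -n 2097152` verifies both still match — the byte offsets are simply the block numbers scaled by the 512-byte logical sector size. A minimal Python sketch of that bookkeeping (block lists and sector size are taken from the trace; the in-memory buffer standing in for `/dev/nbd0` is an illustrative assumption, relying on discarded raid0/concat regions reading back as zeroes):

```python
import os

BLKSIZE = 512                 # logical sector size reported by lsblk in the trace
RW_BLK_NUM = 4096             # 4096 blocks = 2097152 bytes, as in the dd commands
UNMAP_OFFS = [0, 1028, 321]   # unmap_blk_offs from bdev_raid.sh
UNMAP_NUMS = [128, 2035, 456] # unmap_blk_nums from bdev_raid.sh

# Reference file and a bytearray standing in for the nbd device (illustration only).
data = os.urandom(BLKSIZE * RW_BLK_NUM)
ref, dev = bytearray(data), bytearray(data)

for blk_off, blk_num in zip(UNMAP_OFFS, UNMAP_NUMS):
    unmap_off = blk_off * BLKSIZE   # e.g. 1028 * 512 == 526336
    unmap_len = blk_num * BLKSIZE   # e.g. 2035 * 512 == 1041920
    # dd if=/dev/zero of=... seek=<blk_off> count=<blk_num> conv=notrunc:
    ref[unmap_off:unmap_off + unmap_len] = bytes(unmap_len)
    # blkdiscard -o <unmap_off> -l <unmap_len> on the device, assumed to
    # leave the discarded range zero-filled:
    dev[unmap_off:unmap_off + unmap_len] = bytes(unmap_len)
    assert ref == dev               # the cmp -b -n 2097152 step
```

The three (offset, length) pairs reproduce exactly the `unmap_off`/`unmap_len` values logged above: (0, 65536), (526336, 1041920), (164352, 233472).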
00:09:54.239 ************************************ 00:09:54.239 06:19:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:09:54.239 06:19:38 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:09:54.239 06:19:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:54.239 06:19:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:54.239 06:19:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:54.239 ************************************ 00:09:54.239 START TEST raid_function_test_concat 00:09:54.239 ************************************ 00:09:54.239 06:19:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:09:54.239 06:19:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:09:54.239 06:19:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:09:54.239 06:19:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:09:54.239 Process raid pid: 60794 00:09:54.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:54.239 06:19:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60794 00:09:54.239 06:19:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60794' 00:09:54.239 06:19:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:54.239 06:19:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60794 00:09:54.239 06:19:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 60794 ']' 00:09:54.239 06:19:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:54.239 06:19:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:54.239 06:19:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:54.239 06:19:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:54.239 06:19:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:09:54.239 [2024-11-26 06:19:38.347411] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:09:54.240 [2024-11-26 06:19:38.347551] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:54.499 [2024-11-26 06:19:38.509887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:54.760 [2024-11-26 06:19:38.683767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.019 [2024-11-26 06:19:38.954308] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:55.019 [2024-11-26 06:19:38.954372] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:55.278 06:19:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:55.278 06:19:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:09:55.278 06:19:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:09:55.278 06:19:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.278 06:19:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:09:55.278 Base_1 00:09:55.278 06:19:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.278 06:19:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:09:55.278 06:19:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.278 06:19:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:09:55.278 Base_2 00:09:55.278 06:19:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.278 06:19:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:09:55.278 06:19:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.278 06:19:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:09:55.278 [2024-11-26 06:19:39.352431] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:09:55.278 [2024-11-26 06:19:39.354868] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:09:55.278 [2024-11-26 06:19:39.355038] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:55.278 [2024-11-26 06:19:39.355124] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:55.278 [2024-11-26 06:19:39.355535] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:55.278 [2024-11-26 06:19:39.355800] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:55.278 [2024-11-26 06:19:39.355848] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:09:55.278 [2024-11-26 06:19:39.356157] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:55.278 06:19:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.278 06:19:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:09:55.278 06:19:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:09:55.278 06:19:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.278 06:19:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:09:55.278 06:19:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.278 06:19:39 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:09:55.278 06:19:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:09:55.278 06:19:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:09:55.278 06:19:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:09:55.278 06:19:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:09:55.278 06:19:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:55.278 06:19:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:09:55.278 06:19:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:55.278 06:19:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:09:55.278 06:19:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:55.537 06:19:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:09:55.537 06:19:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:09:55.537 [2024-11-26 06:19:39.664070] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:55.798 /dev/nbd0 00:09:55.798 06:19:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:55.798 06:19:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:55.798 06:19:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:55.798 06:19:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:09:55.798 06:19:39 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:55.798 06:19:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:55.798 06:19:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:55.798 06:19:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:09:55.798 06:19:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:55.798 06:19:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:55.798 06:19:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:55.798 1+0 records in 00:09:55.798 1+0 records out 00:09:55.798 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000432474 s, 9.5 MB/s 00:09:55.798 06:19:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:55.798 06:19:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 00:09:55.798 06:19:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:55.798 06:19:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:55.798 06:19:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:09:55.798 06:19:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:55.798 06:19:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:09:55.798 06:19:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:09:55.798 06:19:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 
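The `waitfornbd` helper traced above retries up to 20 times for the device to appear in `/proc/partitions`, then confirms it actually services I/O with a one-block `O_DIRECT` dd. The polling pattern can be sketched generically (the helper name and sleep interval are assumptions; the 20-attempt bound comes from the `(( i <= 20 ))` loop in the trace):

```python
import time

def wait_for(predicate, attempts=20, delay=0.1):
    """Poll predicate() up to `attempts` times, mirroring the
    (( i <= 20 )) / grep -q -w nbd0 /proc/partitions retry loop."""
    for _ in range(attempts):
        if predicate():
            return True
        time.sleep(delay)
    return False

# Illustrative stand-in for 'grep -q -w nbd0 /proc/partitions'
# succeeding on the third poll:
seen = iter([False, False, True])
assert wait_for(lambda: next(seen), delay=0) is True
```

In the log, the partition entry is present on the first check, so the loop `break`s immediately and proceeds to the direct-I/O read.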
00:09:55.798 06:19:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:09:56.057 06:19:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:56.057 { 00:09:56.057 "nbd_device": "/dev/nbd0", 00:09:56.057 "bdev_name": "raid" 00:09:56.057 } 00:09:56.057 ]' 00:09:56.057 06:19:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:56.057 { 00:09:56.057 "nbd_device": "/dev/nbd0", 00:09:56.057 "bdev_name": "raid" 00:09:56.057 } 00:09:56.057 ]' 00:09:56.057 06:19:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:56.057 06:19:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:09:56.057 06:19:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:56.057 06:19:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:09:56.057 06:19:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:09:56.057 06:19:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:09:56.057 06:19:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:09:56.057 06:19:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:09:56.057 06:19:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:09:56.057 06:19:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:09:56.057 06:19:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:09:56.057 06:19:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:09:56.057 06:19:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:09:56.057 06:19:40 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:09:56.057 06:19:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:09:56.057 06:19:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:09:56.057 06:19:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:09:56.057 06:19:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:09:56.057 06:19:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:09:56.057 06:19:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:09:56.057 06:19:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:09:56.057 06:19:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:09:56.057 06:19:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:09:56.057 06:19:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:09:56.057 06:19:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:09:56.057 4096+0 records in 00:09:56.057 4096+0 records out 00:09:56.057 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0353243 s, 59.4 MB/s 00:09:56.057 06:19:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:09:56.317 4096+0 records in 00:09:56.317 4096+0 records out 00:09:56.317 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.25722 s, 8.2 MB/s 00:09:56.317 06:19:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:09:56.317 06:19:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest 
/dev/nbd0 00:09:56.317 06:19:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:09:56.317 06:19:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:09:56.317 06:19:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:09:56.317 06:19:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:09:56.317 06:19:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:09:56.317 128+0 records in 00:09:56.317 128+0 records out 00:09:56.317 65536 bytes (66 kB, 64 KiB) copied, 0.00110112 s, 59.5 MB/s 00:09:56.317 06:19:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:09:56.317 06:19:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:09:56.317 06:19:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:56.317 06:19:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:09:56.317 06:19:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:09:56.317 06:19:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:09:56.317 06:19:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:09:56.317 06:19:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:09:56.317 2035+0 records in 00:09:56.317 2035+0 records out 00:09:56.317 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0145049 s, 71.8 MB/s 00:09:56.317 06:19:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:09:56.317 06:19:40 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:09:56.576 06:19:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:56.576 06:19:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:09:56.577 06:19:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:09:56.577 06:19:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:09:56.577 06:19:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:09:56.577 06:19:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:09:56.577 456+0 records in 00:09:56.577 456+0 records out 00:09:56.577 233472 bytes (233 kB, 228 KiB) copied, 0.00395602 s, 59.0 MB/s 00:09:56.577 06:19:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:09:56.577 06:19:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:09:56.577 06:19:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:56.577 06:19:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:09:56.577 06:19:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:09:56.577 06:19:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:09:56.577 06:19:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:09:56.577 06:19:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:09:56.577 06:19:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:56.577 06:19:40 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:56.577 06:19:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:09:56.577 06:19:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:56.577 06:19:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:09:56.836 06:19:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:56.836 [2024-11-26 06:19:40.750860] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:56.836 06:19:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:56.836 06:19:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:56.836 06:19:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:56.836 06:19:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:56.837 06:19:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:56.837 06:19:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:09:56.837 06:19:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:09:56.837 06:19:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:09:56.837 06:19:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:09:56.837 06:19:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:09:57.097 06:19:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:57.097 06:19:41 bdev_raid.raid_function_test_concat -- 
bdev/nbd_common.sh@64 -- # echo '[]'
00:09:57.097 06:19:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:09:57.097 06:19:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:09:57.097 06:19:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo ''
00:09:57.097 06:19:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:09:57.097 06:19:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true
00:09:57.097 06:19:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0
00:09:57.097 06:19:41 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0
00:09:57.097 06:19:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0
00:09:57.097 06:19:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']'
00:09:57.097 06:19:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60794
00:09:57.097 06:19:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 60794 ']'
00:09:57.097 06:19:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # kill -0 60794
00:09:57.097 06:19:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname
00:09:57.097 06:19:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:57.097 06:19:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60794
killing process with pid 60794
00:09:57.097 06:19:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:57.097 06:19:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:57.097 06:19:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60794'
00:09:57.097 06:19:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 60794
00:09:57.097 [2024-11-26 06:19:41.123294] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:09:57.097 06:19:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 60794
00:09:57.097 [2024-11-26 06:19:41.123450] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:57.097 [2024-11-26 06:19:41.123521] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:57.097 [2024-11-26 06:19:41.123536] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline
00:09:57.356 [2024-11-26 06:19:41.376901] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:09:58.737 06:19:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0
00:09:58.737
00:09:58.737 real 0m4.419s
00:09:58.737 user 0m5.082s
00:09:58.737 sys 0m1.184s
00:09:58.737 06:19:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:58.737 06:19:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:09:58.737 ************************************
00:09:58.737 END TEST raid_function_test_concat
00:09:58.737 ************************************
00:09:58.737 06:19:42 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0
00:09:58.737 06:19:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:09:58.737 06:19:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:58.737 06:19:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:09:58.737 ************************************
00:09:58.737 START TEST raid0_resize_test
00:09:58.737 ************************************
00:09:58.737 06:19:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 0
00:09:58.737 06:19:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0
00:09:58.737 06:19:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512
00:09:58.737 06:19:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32
00:09:58.737 06:19:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64
00:09:58.737 06:19:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt
00:09:58.737 06:19:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb
00:09:58.737 06:19:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb
00:09:58.737 06:19:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size
00:09:58.737 06:19:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60923
00:09:58.737 06:19:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:09:58.737 06:19:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60923'
Process raid pid: 60923
00:09:58.737 06:19:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60923
00:09:58.737 06:19:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60923 ']'
00:09:58.737 06:19:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:58.737 06:19:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:58.737 06:19:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:58.737 06:19:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:58.737 06:19:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:09:58.737 [2024-11-26 06:19:42.833603] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization...
00:09:58.737 [2024-11-26 06:19:42.833830] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:09:59.023 [2024-11-26 06:19:43.010952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:59.281 [2024-11-26 06:19:43.167107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:59.539 [2024-11-26 06:19:43.413379] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:59.539 [2024-11-26 06:19:43.413554] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:59.798 06:19:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:59.798 06:19:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0
00:09:59.798 06:19:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512
00:09:59.798 06:19:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:59.798 06:19:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:09:59.798 Base_1
00:09:59.798 06:19:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:59.798 06:19:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512
00:09:59.798 06:19:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:59.798 06:19:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:09:59.798 Base_2
00:09:59.798 06:19:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:59.798 06:19:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']'
00:09:59.798 06:19:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid
00:09:59.798 06:19:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:59.798 06:19:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:09:59.798 [2024-11-26 06:19:43.725877] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed
00:09:59.798 [2024-11-26 06:19:43.728388] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed
00:09:59.798 [2024-11-26 06:19:43.728509] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:09:59.798 [2024-11-26 06:19:43.728575] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:09:59.798 [2024-11-26 06:19:43.728955] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0
00:09:59.798 [2024-11-26 06:19:43.729179] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:09:59.798 [2024-11-26 06:19:43.729227] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780
00:09:59.798 [2024-11-26 06:19:43.729513] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:59.798 06:19:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:59.798 06:19:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64
00:09:59.798 06:19:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:59.798 06:19:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:09:59.798 [2024-11-26 06:19:43.737819] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:09:59.798 [2024-11-26 06:19:43.737898] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072
00:09:59.798 true
00:09:59.798 06:19:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:59.798 06:19:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid
00:09:59.798 06:19:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks'
00:09:59.798 06:19:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:59.798 06:19:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:09:59.798 [2024-11-26 06:19:43.754061] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:59.798 06:19:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:59.798 06:19:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072
00:09:59.798 06:19:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64
00:09:59.798 06:19:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']'
00:09:59.798 06:19:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64
00:09:59.798 06:19:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']'
00:09:59.798 06:19:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64
00:09:59.798 06:19:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:59.798 06:19:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:09:59.798 [2024-11-26 06:19:43.801775] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:09:59.798 [2024-11-26 06:19:43.801899] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072
00:09:59.798 [2024-11-26 06:19:43.801968] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144
00:09:59.798 true
00:09:59.798 06:19:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:59.798 06:19:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid
00:09:59.798 06:19:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:59.798 06:19:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:09:59.798 06:19:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks'
00:09:59.798 [2024-11-26 06:19:43.817963] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:59.798 06:19:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:59.798 06:19:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144
00:09:59.798 06:19:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128
00:09:59.798 06:19:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']'
00:09:59.798 06:19:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128
00:09:59.798 06:19:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']'
00:09:59.798 06:19:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60923
00:09:59.798 06:19:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60923 ']'
00:09:59.798 06:19:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 60923
00:09:59.798 06:19:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname
00:09:59.798 06:19:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:59.798 06:19:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60923
00:09:59.798 06:19:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:59.798 06:19:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:59.798 06:19:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60923'
killing process with pid 60923
00:09:59.798 06:19:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 60923
00:09:59.798 [2024-11-26 06:19:43.903501] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:09:59.798 06:19:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 60923
00:09:59.798 [2024-11-26 06:19:43.903741] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:59.798 [2024-11-26 06:19:43.903881] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:59.798 [2024-11-26 06:19:43.903955] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
00:09:59.798 [2024-11-26 06:19:43.924063] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:10:01.177 06:19:45 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0
00:10:01.177
00:10:01.177 real 0m2.522s
00:10:01.177 user 0m2.592s
00:10:01.177 sys 0m0.447s
00:10:01.177 06:19:45 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:01.177 06:19:45 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:10:01.177 ************************************
00:10:01.177 END TEST raid0_resize_test
00:10:01.177 ************************************
00:10:01.435 06:19:45 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1
00:10:01.435 06:19:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:10:01.435 06:19:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:01.435 06:19:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:10:01.435 ************************************
00:10:01.435 START TEST raid1_resize_test
00:10:01.435 ************************************
00:10:01.435 06:19:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1
00:10:01.435 06:19:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1
00:10:01.435 06:19:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512
00:10:01.435 06:19:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32
00:10:01.435 06:19:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64
00:10:01.435 06:19:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt
00:10:01.435 06:19:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb
00:10:01.435 06:19:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb
Process raid pid: 60985
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:01.435 06:19:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size
00:10:01.435 06:19:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60985
00:10:01.435 06:19:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:10:01.435 06:19:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60985'
00:10:01.435 06:19:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60985
00:10:01.435 06:19:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60985 ']'
00:10:01.435 06:19:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:01.435 06:19:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:01.435 06:19:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:01.435 06:19:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:01.435 06:19:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:10:01.435 [2024-11-26 06:19:45.428961] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization...
00:10:01.435 [2024-11-26 06:19:45.429214] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:01.692 [2024-11-26 06:19:45.590800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:01.692 [2024-11-26 06:19:45.744933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:01.949 [2024-11-26 06:19:46.015812] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:01.949 [2024-11-26 06:19:46.015990] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:02.208 06:19:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:02.208 06:19:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0
00:10:02.208 06:19:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512
00:10:02.208 06:19:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:02.208 06:19:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:10:02.208 Base_1
00:10:02.208 06:19:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:02.208 06:19:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512
00:10:02.208 06:19:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:02.208 06:19:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:10:02.208 Base_2
00:10:02.208 06:19:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:02.208 06:19:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']'
00:10:02.208 06:19:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid
00:10:02.208 06:19:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:02.208 06:19:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:10:02.208 [2024-11-26 06:19:46.325519] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed
00:10:02.208 [2024-11-26 06:19:46.327844] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed
00:10:02.208 [2024-11-26 06:19:46.327974] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:10:02.208 [2024-11-26 06:19:46.328029] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512
00:10:02.208 [2024-11-26 06:19:46.328425] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0
00:10:02.208 [2024-11-26 06:19:46.328648] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:10:02.208 [2024-11-26 06:19:46.328693] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780
00:10:02.208 [2024-11-26 06:19:46.329023] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:02.208 06:19:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:02.208 06:19:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64
00:10:02.208 06:19:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:02.208 06:19:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:10:02.208 [2024-11-26 06:19:46.337517] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:10:02.208 [2024-11-26 06:19:46.337631] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072
00:10:02.467 true
00:10:02.467 06:19:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:02.467 06:19:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid
00:10:02.467 06:19:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks'
00:10:02.467 06:19:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:02.467 06:19:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:10:02.467 [2024-11-26 06:19:46.353735] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:10:02.467 06:19:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:02.467 06:19:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536
00:10:02.467 06:19:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32
00:10:02.467 06:19:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']'
00:10:02.467 06:19:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32
00:10:02.467 06:19:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']'
00:10:02.467 06:19:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64
00:10:02.467 06:19:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:02.467 06:19:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:10:02.467 [2024-11-26 06:19:46.397441] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:10:02.467 [2024-11-26 06:19:46.397582] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072
00:10:02.467 [2024-11-26 06:19:46.397680] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072
00:10:02.467 true
00:10:02.467 06:19:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:02.467 06:19:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid
00:10:02.467 06:19:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks'
00:10:02.467 06:19:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:02.467 06:19:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:10:02.467 [2024-11-26 06:19:46.409615] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:10:02.467 06:19:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:02.467 06:19:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072
00:10:02.467 06:19:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64
00:10:02.467 06:19:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']'
00:10:02.467 06:19:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64
00:10:02.467 06:19:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']'
00:10:02.467 06:19:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60985
00:10:02.467 06:19:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60985 ']'
00:10:02.467 06:19:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 60985
00:10:02.467 06:19:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname
00:10:02.467 06:19:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:02.467 06:19:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60985
killing process with pid 60985
00:10:02.467 06:19:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:02.467 06:19:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:02.467 06:19:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60985'
00:10:02.467 06:19:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 60985
00:10:02.467 [2024-11-26 06:19:46.492892] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:10:02.467 [2024-11-26 06:19:46.493020] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:10:02.467 06:19:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 60985
00:10:02.467 [2024-11-26 06:19:46.493685] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:10:02.467 [2024-11-26 06:19:46.493721] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
00:10:02.467 [2024-11-26 06:19:46.512900] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:10:03.841 ************************************
00:10:03.841 END TEST raid1_resize_test
00:10:03.841 ************************************
00:10:03.841 06:19:47 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0
00:10:03.841
00:10:03.841 real 0m2.489s
00:10:03.841 user 0m2.561s
00:10:03.841 sys 0m0.435s
00:10:03.841 06:19:47 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:03.841 06:19:47 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:10:03.841 06:19:47 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4}
00:10:03.841 06:19:47 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1
00:10:03.841 06:19:47 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false
00:10:03.841 06:19:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:10:03.841 06:19:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:03.841 06:19:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:10:03.841 ************************************
00:10:03.841 START TEST raid_state_function_test
00:10:03.841 ************************************
00:10:03.841 06:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false
00:10:03.841 06:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0
00:10:03.841 06:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2
00:10:03.841 06:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:10:03.841 06:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:10:03.841 06:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:10:03.841 06:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:10:03.841 06:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:10:03.841 06:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:10:03.841 06:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:10:03.841 06:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:10:03.841 06:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:10:03.841 06:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:10:03.841 06:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:10:03.841 06:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:10:03.841 06:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:10:03.841 06:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:10:03.841 06:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:10:03.841 06:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:10:03.841 06:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']'
00:10:03.841 06:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:10:03.841 06:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:10:03.841 06:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:10:03.841 06:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
00:10:03.841 06:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61047
00:10:03.841 06:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
Process raid pid: 61047
00:10:03.841 06:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61047'
00:10:03.841 06:19:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61047
00:10:03.841 06:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 61047 ']'
00:10:03.841 06:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:03.842 06:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:03.842 06:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:03.842 06:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:03.842 06:19:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:04.099 [2024-11-26 06:19:47.989702] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization...
00:10:04.100 [2024-11-26 06:19:47.989828] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:04.100 [2024-11-26 06:19:48.163196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:04.358 [2024-11-26 06:19:48.316134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:04.621 [2024-11-26 06:19:48.571805] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:04.621 [2024-11-26 06:19:48.571867] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:04.881 06:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:04.881 06:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0
00:10:04.881 06:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:10:04.881 06:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:04.881 06:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:04.881 [2024-11-26 06:19:48.854820] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:10:04.881 [2024-11-26 06:19:48.854903] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:10:04.881 [2024-11-26 06:19:48.854915] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:10:04.881 [2024-11-26 06:19:48.854927] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:10:04.881 06:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:04.881 06:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2
00:10:04.881 06:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:04.881 06:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:04.881 06:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:04.881 06:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:04.881 06:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:10:04.881 06:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:04.881 06:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:04.881 06:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:04.881 06:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:04.881 06:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:04.881 06:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:04.881 06:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:04.881 06:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:04.881 06:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:04.881 06:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:04.881 "name": "Existed_Raid",
00:10:04.881 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:04.881 "strip_size_kb": 64,
00:10:04.881 "state": "configuring",
00:10:04.881 "raid_level": "raid0",
00:10:04.881 "superblock": false,
00:10:04.881 "num_base_bdevs": 2,
00:10:04.881 "num_base_bdevs_discovered": 0,
00:10:04.881 "num_base_bdevs_operational": 2,
00:10:04.881 "base_bdevs_list": [
00:10:04.881 {
00:10:04.881 "name": "BaseBdev1",
00:10:04.881 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:04.881 "is_configured": false,
00:10:04.881 "data_offset": 0,
00:10:04.881 "data_size": 0
00:10:04.881 },
00:10:04.881 {
00:10:04.881 "name": "BaseBdev2",
00:10:04.881 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:04.881 "is_configured": false,
00:10:04.881 "data_offset": 0,
00:10:04.881 "data_size": 0
00:10:04.881 }
00:10:04.881 ]
00:10:04.881 }'
00:10:04.881 06:19:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:04.881 06:19:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:05.449 06:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:10:05.449 06:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:05.449 06:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:05.449 [2024-11-26 06:19:49.329959] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:10:05.449 [2024-11-26 06:19:49.330127] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:10:05.449 06:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:05.449 06:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:10:05.449 06:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:05.449 06:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:05.449 [2024-11-26 06:19:49.341920] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:10:05.449 [2024-11-26 06:19:49.342059] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:10:05.449 [2024-11-26 06:19:49.342093] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:10:05.449 [2024-11-26 06:19:49.342122] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:10:05.449 06:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:05.449 06:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:10:05.449 06:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:05.449 06:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
BaseBdev1
00:10:05.449 [2024-11-26 06:19:49.399345] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:10:05.449 06:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:05.449 06:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:10:05.449 06:19:49
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:05.449 06:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:05.449 06:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:05.449 06:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:05.449 06:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:05.449 06:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:05.449 06:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.449 06:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.449 06:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.449 06:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:05.449 06:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.449 06:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.449 [ 00:10:05.449 { 00:10:05.449 "name": "BaseBdev1", 00:10:05.449 "aliases": [ 00:10:05.449 "f3004043-25d8-4ddc-b525-fd2511df9dd4" 00:10:05.449 ], 00:10:05.449 "product_name": "Malloc disk", 00:10:05.449 "block_size": 512, 00:10:05.449 "num_blocks": 65536, 00:10:05.449 "uuid": "f3004043-25d8-4ddc-b525-fd2511df9dd4", 00:10:05.449 "assigned_rate_limits": { 00:10:05.449 "rw_ios_per_sec": 0, 00:10:05.449 "rw_mbytes_per_sec": 0, 00:10:05.449 "r_mbytes_per_sec": 0, 00:10:05.449 "w_mbytes_per_sec": 0 00:10:05.449 }, 00:10:05.449 "claimed": true, 00:10:05.449 "claim_type": "exclusive_write", 00:10:05.449 "zoned": false, 00:10:05.449 "supported_io_types": { 
00:10:05.449 "read": true, 00:10:05.449 "write": true, 00:10:05.449 "unmap": true, 00:10:05.449 "flush": true, 00:10:05.449 "reset": true, 00:10:05.449 "nvme_admin": false, 00:10:05.449 "nvme_io": false, 00:10:05.449 "nvme_io_md": false, 00:10:05.449 "write_zeroes": true, 00:10:05.449 "zcopy": true, 00:10:05.449 "get_zone_info": false, 00:10:05.449 "zone_management": false, 00:10:05.449 "zone_append": false, 00:10:05.449 "compare": false, 00:10:05.449 "compare_and_write": false, 00:10:05.449 "abort": true, 00:10:05.449 "seek_hole": false, 00:10:05.449 "seek_data": false, 00:10:05.449 "copy": true, 00:10:05.449 "nvme_iov_md": false 00:10:05.449 }, 00:10:05.449 "memory_domains": [ 00:10:05.449 { 00:10:05.449 "dma_device_id": "system", 00:10:05.449 "dma_device_type": 1 00:10:05.449 }, 00:10:05.449 { 00:10:05.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.449 "dma_device_type": 2 00:10:05.449 } 00:10:05.449 ], 00:10:05.449 "driver_specific": {} 00:10:05.449 } 00:10:05.449 ] 00:10:05.449 06:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.449 06:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:05.449 06:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:10:05.449 06:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.449 06:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.449 06:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:05.449 06:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:05.449 06:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:05.449 06:19:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.449 06:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.449 06:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.449 06:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.449 06:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.449 06:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.449 06:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.449 06:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.449 06:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.449 06:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.449 "name": "Existed_Raid", 00:10:05.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.449 "strip_size_kb": 64, 00:10:05.449 "state": "configuring", 00:10:05.449 "raid_level": "raid0", 00:10:05.449 "superblock": false, 00:10:05.449 "num_base_bdevs": 2, 00:10:05.449 "num_base_bdevs_discovered": 1, 00:10:05.449 "num_base_bdevs_operational": 2, 00:10:05.449 "base_bdevs_list": [ 00:10:05.449 { 00:10:05.449 "name": "BaseBdev1", 00:10:05.449 "uuid": "f3004043-25d8-4ddc-b525-fd2511df9dd4", 00:10:05.449 "is_configured": true, 00:10:05.449 "data_offset": 0, 00:10:05.449 "data_size": 65536 00:10:05.449 }, 00:10:05.449 { 00:10:05.449 "name": "BaseBdev2", 00:10:05.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.449 "is_configured": false, 00:10:05.449 "data_offset": 0, 00:10:05.449 "data_size": 0 00:10:05.449 } 00:10:05.449 ] 00:10:05.450 }' 00:10:05.450 06:19:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.450 06:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.017 06:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:06.017 06:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.017 06:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.017 [2024-11-26 06:19:49.862664] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:06.017 [2024-11-26 06:19:49.862846] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:06.017 06:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.017 06:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:06.017 06:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.017 06:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.017 [2024-11-26 06:19:49.874699] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:06.017 [2024-11-26 06:19:49.877166] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:06.017 [2024-11-26 06:19:49.877299] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:06.017 06:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.017 06:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:06.017 06:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:06.017 06:19:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:10:06.017 06:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.017 06:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.017 06:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:06.017 06:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:06.017 06:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:06.017 06:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.017 06:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.017 06:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.017 06:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.017 06:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.017 06:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.017 06:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.017 06:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.018 06:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.018 06:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.018 "name": "Existed_Raid", 00:10:06.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.018 "strip_size_kb": 64, 00:10:06.018 "state": "configuring", 00:10:06.018 
"raid_level": "raid0", 00:10:06.018 "superblock": false, 00:10:06.018 "num_base_bdevs": 2, 00:10:06.018 "num_base_bdevs_discovered": 1, 00:10:06.018 "num_base_bdevs_operational": 2, 00:10:06.018 "base_bdevs_list": [ 00:10:06.018 { 00:10:06.018 "name": "BaseBdev1", 00:10:06.018 "uuid": "f3004043-25d8-4ddc-b525-fd2511df9dd4", 00:10:06.018 "is_configured": true, 00:10:06.018 "data_offset": 0, 00:10:06.018 "data_size": 65536 00:10:06.018 }, 00:10:06.018 { 00:10:06.018 "name": "BaseBdev2", 00:10:06.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.018 "is_configured": false, 00:10:06.018 "data_offset": 0, 00:10:06.018 "data_size": 0 00:10:06.018 } 00:10:06.018 ] 00:10:06.018 }' 00:10:06.018 06:19:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.018 06:19:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.277 06:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:06.277 06:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.277 06:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.277 [2024-11-26 06:19:50.328629] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:06.277 [2024-11-26 06:19:50.328845] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:06.277 [2024-11-26 06:19:50.328880] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:10:06.277 [2024-11-26 06:19:50.329331] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:06.277 [2024-11-26 06:19:50.329609] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:06.277 [2024-11-26 06:19:50.329666] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name 
Existed_Raid, raid_bdev 0x617000007e80 00:10:06.277 [2024-11-26 06:19:50.330106] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:06.277 BaseBdev2 00:10:06.277 06:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.277 06:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:06.277 06:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:06.277 06:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:06.277 06:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:06.277 06:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:06.278 06:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:06.278 06:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:06.278 06:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.278 06:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.278 06:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.278 06:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:06.278 06:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.278 06:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.278 [ 00:10:06.278 { 00:10:06.278 "name": "BaseBdev2", 00:10:06.278 "aliases": [ 00:10:06.278 "431284be-4cff-4f1b-840e-3b1bc5864a48" 00:10:06.278 ], 00:10:06.278 "product_name": "Malloc disk", 00:10:06.278 "block_size": 512, 00:10:06.278 
"num_blocks": 65536, 00:10:06.278 "uuid": "431284be-4cff-4f1b-840e-3b1bc5864a48", 00:10:06.278 "assigned_rate_limits": { 00:10:06.278 "rw_ios_per_sec": 0, 00:10:06.278 "rw_mbytes_per_sec": 0, 00:10:06.278 "r_mbytes_per_sec": 0, 00:10:06.278 "w_mbytes_per_sec": 0 00:10:06.278 }, 00:10:06.278 "claimed": true, 00:10:06.278 "claim_type": "exclusive_write", 00:10:06.278 "zoned": false, 00:10:06.278 "supported_io_types": { 00:10:06.278 "read": true, 00:10:06.278 "write": true, 00:10:06.278 "unmap": true, 00:10:06.278 "flush": true, 00:10:06.278 "reset": true, 00:10:06.278 "nvme_admin": false, 00:10:06.278 "nvme_io": false, 00:10:06.278 "nvme_io_md": false, 00:10:06.278 "write_zeroes": true, 00:10:06.278 "zcopy": true, 00:10:06.278 "get_zone_info": false, 00:10:06.278 "zone_management": false, 00:10:06.278 "zone_append": false, 00:10:06.278 "compare": false, 00:10:06.278 "compare_and_write": false, 00:10:06.278 "abort": true, 00:10:06.278 "seek_hole": false, 00:10:06.278 "seek_data": false, 00:10:06.278 "copy": true, 00:10:06.278 "nvme_iov_md": false 00:10:06.278 }, 00:10:06.278 "memory_domains": [ 00:10:06.278 { 00:10:06.278 "dma_device_id": "system", 00:10:06.278 "dma_device_type": 1 00:10:06.278 }, 00:10:06.278 { 00:10:06.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.278 "dma_device_type": 2 00:10:06.278 } 00:10:06.278 ], 00:10:06.278 "driver_specific": {} 00:10:06.278 } 00:10:06.278 ] 00:10:06.278 06:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.278 06:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:06.278 06:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:06.278 06:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:06.278 06:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:10:06.278 06:19:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.278 06:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:06.278 06:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:06.278 06:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:06.278 06:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:06.278 06:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.278 06:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.278 06:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.278 06:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.278 06:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.278 06:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.278 06:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.278 06:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.278 06:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.537 06:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.537 "name": "Existed_Raid", 00:10:06.537 "uuid": "f7a84da6-a93c-4e31-b7c2-69b39d054dd2", 00:10:06.537 "strip_size_kb": 64, 00:10:06.537 "state": "online", 00:10:06.537 "raid_level": "raid0", 00:10:06.537 "superblock": false, 00:10:06.537 "num_base_bdevs": 2, 00:10:06.537 "num_base_bdevs_discovered": 2, 00:10:06.537 
"num_base_bdevs_operational": 2, 00:10:06.537 "base_bdevs_list": [ 00:10:06.537 { 00:10:06.537 "name": "BaseBdev1", 00:10:06.537 "uuid": "f3004043-25d8-4ddc-b525-fd2511df9dd4", 00:10:06.537 "is_configured": true, 00:10:06.537 "data_offset": 0, 00:10:06.537 "data_size": 65536 00:10:06.537 }, 00:10:06.537 { 00:10:06.537 "name": "BaseBdev2", 00:10:06.537 "uuid": "431284be-4cff-4f1b-840e-3b1bc5864a48", 00:10:06.537 "is_configured": true, 00:10:06.537 "data_offset": 0, 00:10:06.537 "data_size": 65536 00:10:06.537 } 00:10:06.537 ] 00:10:06.537 }' 00:10:06.537 06:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.537 06:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.796 06:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:06.797 06:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:06.797 06:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:06.797 06:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:06.797 06:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:06.797 06:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:06.797 06:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:06.797 06:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:06.797 06:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.797 06:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.797 [2024-11-26 06:19:50.876233] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 
00:10:06.797 06:19:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.797 06:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:06.797 "name": "Existed_Raid", 00:10:06.797 "aliases": [ 00:10:06.797 "f7a84da6-a93c-4e31-b7c2-69b39d054dd2" 00:10:06.797 ], 00:10:06.797 "product_name": "Raid Volume", 00:10:06.797 "block_size": 512, 00:10:06.797 "num_blocks": 131072, 00:10:06.797 "uuid": "f7a84da6-a93c-4e31-b7c2-69b39d054dd2", 00:10:06.797 "assigned_rate_limits": { 00:10:06.797 "rw_ios_per_sec": 0, 00:10:06.797 "rw_mbytes_per_sec": 0, 00:10:06.797 "r_mbytes_per_sec": 0, 00:10:06.797 "w_mbytes_per_sec": 0 00:10:06.797 }, 00:10:06.797 "claimed": false, 00:10:06.797 "zoned": false, 00:10:06.797 "supported_io_types": { 00:10:06.797 "read": true, 00:10:06.797 "write": true, 00:10:06.797 "unmap": true, 00:10:06.797 "flush": true, 00:10:06.797 "reset": true, 00:10:06.797 "nvme_admin": false, 00:10:06.797 "nvme_io": false, 00:10:06.797 "nvme_io_md": false, 00:10:06.797 "write_zeroes": true, 00:10:06.797 "zcopy": false, 00:10:06.797 "get_zone_info": false, 00:10:06.797 "zone_management": false, 00:10:06.797 "zone_append": false, 00:10:06.797 "compare": false, 00:10:06.797 "compare_and_write": false, 00:10:06.797 "abort": false, 00:10:06.797 "seek_hole": false, 00:10:06.797 "seek_data": false, 00:10:06.797 "copy": false, 00:10:06.797 "nvme_iov_md": false 00:10:06.797 }, 00:10:06.797 "memory_domains": [ 00:10:06.797 { 00:10:06.797 "dma_device_id": "system", 00:10:06.797 "dma_device_type": 1 00:10:06.797 }, 00:10:06.797 { 00:10:06.797 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.797 "dma_device_type": 2 00:10:06.797 }, 00:10:06.797 { 00:10:06.797 "dma_device_id": "system", 00:10:06.797 "dma_device_type": 1 00:10:06.797 }, 00:10:06.797 { 00:10:06.797 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.797 "dma_device_type": 2 00:10:06.797 } 00:10:06.797 ], 00:10:06.797 "driver_specific": { 
00:10:06.797 "raid": { 00:10:06.797 "uuid": "f7a84da6-a93c-4e31-b7c2-69b39d054dd2", 00:10:06.797 "strip_size_kb": 64, 00:10:06.797 "state": "online", 00:10:06.797 "raid_level": "raid0", 00:10:06.797 "superblock": false, 00:10:06.797 "num_base_bdevs": 2, 00:10:06.797 "num_base_bdevs_discovered": 2, 00:10:06.797 "num_base_bdevs_operational": 2, 00:10:06.797 "base_bdevs_list": [ 00:10:06.797 { 00:10:06.797 "name": "BaseBdev1", 00:10:06.797 "uuid": "f3004043-25d8-4ddc-b525-fd2511df9dd4", 00:10:06.797 "is_configured": true, 00:10:06.797 "data_offset": 0, 00:10:06.797 "data_size": 65536 00:10:06.797 }, 00:10:06.797 { 00:10:06.797 "name": "BaseBdev2", 00:10:06.797 "uuid": "431284be-4cff-4f1b-840e-3b1bc5864a48", 00:10:06.797 "is_configured": true, 00:10:06.797 "data_offset": 0, 00:10:06.797 "data_size": 65536 00:10:06.797 } 00:10:06.797 ] 00:10:06.797 } 00:10:06.797 } 00:10:06.797 }' 00:10:06.797 06:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:07.056 06:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:07.056 BaseBdev2' 00:10:07.056 06:19:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:07.056 06:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:07.056 06:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:07.056 06:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:07.056 06:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:07.056 06:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.056 
06:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.057 06:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.057 06:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:07.057 06:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:07.057 06:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:07.057 06:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:07.057 06:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.057 06:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:07.057 06:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.057 06:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.057 06:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:07.057 06:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:07.057 06:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:07.057 06:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.057 06:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.057 [2024-11-26 06:19:51.111568] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:07.057 [2024-11-26 06:19:51.111702] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:07.057 [2024-11-26 06:19:51.111804] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:07.315 06:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.315 06:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:07.315 06:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:07.315 06:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:07.315 06:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:07.315 06:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:07.315 06:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:10:07.315 06:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.315 06:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:07.315 06:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:07.315 06:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:07.315 06:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:07.315 06:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.315 06:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.315 06:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.315 06:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.315 06:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.315 06:19:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.315 06:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.315 06:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.315 06:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.315 06:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.315 "name": "Existed_Raid", 00:10:07.315 "uuid": "f7a84da6-a93c-4e31-b7c2-69b39d054dd2", 00:10:07.315 "strip_size_kb": 64, 00:10:07.315 "state": "offline", 00:10:07.316 "raid_level": "raid0", 00:10:07.316 "superblock": false, 00:10:07.316 "num_base_bdevs": 2, 00:10:07.316 "num_base_bdevs_discovered": 1, 00:10:07.316 "num_base_bdevs_operational": 1, 00:10:07.316 "base_bdevs_list": [ 00:10:07.316 { 00:10:07.316 "name": null, 00:10:07.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.316 "is_configured": false, 00:10:07.316 "data_offset": 0, 00:10:07.316 "data_size": 65536 00:10:07.316 }, 00:10:07.316 { 00:10:07.316 "name": "BaseBdev2", 00:10:07.316 "uuid": "431284be-4cff-4f1b-840e-3b1bc5864a48", 00:10:07.316 "is_configured": true, 00:10:07.316 "data_offset": 0, 00:10:07.316 "data_size": 65536 00:10:07.316 } 00:10:07.316 ] 00:10:07.316 }' 00:10:07.316 06:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.316 06:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.574 06:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:07.575 06:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:07.575 06:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.575 06:19:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:07.575 06:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.575 06:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.575 06:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.833 06:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:07.833 06:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:07.833 06:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:07.833 06:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.833 06:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.833 [2024-11-26 06:19:51.718688] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:07.833 [2024-11-26 06:19:51.718866] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:07.833 06:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.833 06:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:07.833 06:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:07.833 06:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.833 06:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:07.833 06:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.833 06:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.833 06:19:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.833 06:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:07.833 06:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:07.833 06:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:10:07.833 06:19:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61047 00:10:07.833 06:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 61047 ']' 00:10:07.833 06:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 61047 00:10:07.833 06:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:07.833 06:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:07.833 06:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61047 00:10:07.833 killing process with pid 61047 00:10:07.833 06:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:07.833 06:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:07.833 06:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61047' 00:10:07.833 06:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 61047 00:10:07.833 [2024-11-26 06:19:51.931157] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:07.833 06:19:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 61047 00:10:07.833 [2024-11-26 06:19:51.950665] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:09.210 06:19:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 
00:10:09.210 00:10:09.210 real 0m5.388s 00:10:09.210 user 0m7.582s 00:10:09.210 sys 0m0.932s 00:10:09.210 ************************************ 00:10:09.210 END TEST raid_state_function_test 00:10:09.210 ************************************ 00:10:09.210 06:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:09.210 06:19:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.210 06:19:53 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:10:09.210 06:19:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:09.211 06:19:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:09.211 06:19:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:09.469 ************************************ 00:10:09.469 START TEST raid_state_function_test_sb 00:10:09.469 ************************************ 00:10:09.469 06:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:10:09.469 06:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:09.469 06:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:10:09.469 06:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:09.469 06:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:09.469 06:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:09.469 06:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:09.469 06:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:09.469 06:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:10:09.469 06:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:09.469 06:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:09.469 06:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:09.469 06:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:09.469 06:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:09.469 06:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:09.469 06:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:09.469 06:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:09.469 06:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:09.469 06:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:09.469 06:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:09.469 06:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:09.469 06:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:09.469 Process raid pid: 61306 00:10:09.469 06:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:09.469 06:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:09.470 06:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61306 00:10:09.470 06:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L 
bdev_raid 00:10:09.470 06:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61306' 00:10:09.470 06:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61306 00:10:09.470 06:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 61306 ']' 00:10:09.470 06:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:09.470 06:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:09.470 06:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:09.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:09.470 06:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:09.470 06:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.470 [2024-11-26 06:19:53.454512] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:10:09.470 [2024-11-26 06:19:53.454722] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:09.728 [2024-11-26 06:19:53.634842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.728 [2024-11-26 06:19:53.788274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.987 [2024-11-26 06:19:54.056918] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:09.987 [2024-11-26 06:19:54.057117] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:10.337 06:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:10.337 06:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:10.337 06:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:10.337 06:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.337 06:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.337 [2024-11-26 06:19:54.303789] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:10.337 [2024-11-26 06:19:54.303929] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:10.337 [2024-11-26 06:19:54.303964] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:10.337 [2024-11-26 06:19:54.303990] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:10.337 06:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.337 
06:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:10:10.337 06:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.337 06:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:10.337 06:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:10.337 06:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:10.337 06:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:10.337 06:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.337 06:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.337 06:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.337 06:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.337 06:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.337 06:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.337 06:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.337 06:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.337 06:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.337 06:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.337 "name": "Existed_Raid", 00:10:10.337 "uuid": "3ee57344-3c42-4fe4-898f-30d9c85fadb0", 00:10:10.337 "strip_size_kb": 
64, 00:10:10.337 "state": "configuring", 00:10:10.337 "raid_level": "raid0", 00:10:10.337 "superblock": true, 00:10:10.337 "num_base_bdevs": 2, 00:10:10.337 "num_base_bdevs_discovered": 0, 00:10:10.337 "num_base_bdevs_operational": 2, 00:10:10.337 "base_bdevs_list": [ 00:10:10.337 { 00:10:10.337 "name": "BaseBdev1", 00:10:10.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.337 "is_configured": false, 00:10:10.337 "data_offset": 0, 00:10:10.337 "data_size": 0 00:10:10.337 }, 00:10:10.337 { 00:10:10.337 "name": "BaseBdev2", 00:10:10.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.337 "is_configured": false, 00:10:10.337 "data_offset": 0, 00:10:10.337 "data_size": 0 00:10:10.337 } 00:10:10.337 ] 00:10:10.337 }' 00:10:10.337 06:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.337 06:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.609 06:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:10.609 06:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.609 06:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.609 [2024-11-26 06:19:54.719100] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:10.609 [2024-11-26 06:19:54.719247] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:10.609 06:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.609 06:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:10.609 06:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.609 06:19:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.609 [2024-11-26 06:19:54.731050] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:10.609 [2024-11-26 06:19:54.731247] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:10.609 [2024-11-26 06:19:54.731282] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:10.609 [2024-11-26 06:19:54.731343] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:10.609 06:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.609 06:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:10.609 06:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.609 06:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.870 [2024-11-26 06:19:54.790288] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:10.870 BaseBdev1 00:10:10.870 06:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.870 06:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:10.870 06:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:10.870 06:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:10.870 06:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:10.870 06:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:10.870 06:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:10:10.870 06:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:10.870 06:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.870 06:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.870 06:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.870 06:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:10.870 06:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.870 06:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.870 [ 00:10:10.870 { 00:10:10.870 "name": "BaseBdev1", 00:10:10.870 "aliases": [ 00:10:10.870 "c93fc7a3-9c2b-490e-86c7-19dd71a3a315" 00:10:10.870 ], 00:10:10.870 "product_name": "Malloc disk", 00:10:10.870 "block_size": 512, 00:10:10.870 "num_blocks": 65536, 00:10:10.870 "uuid": "c93fc7a3-9c2b-490e-86c7-19dd71a3a315", 00:10:10.870 "assigned_rate_limits": { 00:10:10.870 "rw_ios_per_sec": 0, 00:10:10.870 "rw_mbytes_per_sec": 0, 00:10:10.870 "r_mbytes_per_sec": 0, 00:10:10.870 "w_mbytes_per_sec": 0 00:10:10.870 }, 00:10:10.870 "claimed": true, 00:10:10.870 "claim_type": "exclusive_write", 00:10:10.870 "zoned": false, 00:10:10.870 "supported_io_types": { 00:10:10.870 "read": true, 00:10:10.870 "write": true, 00:10:10.870 "unmap": true, 00:10:10.870 "flush": true, 00:10:10.870 "reset": true, 00:10:10.870 "nvme_admin": false, 00:10:10.870 "nvme_io": false, 00:10:10.870 "nvme_io_md": false, 00:10:10.870 "write_zeroes": true, 00:10:10.870 "zcopy": true, 00:10:10.870 "get_zone_info": false, 00:10:10.870 "zone_management": false, 00:10:10.870 "zone_append": false, 00:10:10.870 "compare": false, 00:10:10.870 "compare_and_write": false, 00:10:10.870 
"abort": true, 00:10:10.870 "seek_hole": false, 00:10:10.870 "seek_data": false, 00:10:10.870 "copy": true, 00:10:10.870 "nvme_iov_md": false 00:10:10.870 }, 00:10:10.870 "memory_domains": [ 00:10:10.870 { 00:10:10.870 "dma_device_id": "system", 00:10:10.870 "dma_device_type": 1 00:10:10.870 }, 00:10:10.870 { 00:10:10.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.870 "dma_device_type": 2 00:10:10.870 } 00:10:10.870 ], 00:10:10.870 "driver_specific": {} 00:10:10.870 } 00:10:10.870 ] 00:10:10.870 06:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.870 06:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:10.870 06:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:10:10.870 06:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.870 06:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:10.870 06:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:10.870 06:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:10.870 06:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:10.870 06:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.870 06:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.870 06:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.870 06:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.870 06:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:10.870 06:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.870 06:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.870 06:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.870 06:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.870 06:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.870 "name": "Existed_Raid", 00:10:10.870 "uuid": "526d5317-aa40-476c-9a87-cd79e7c8e7ec", 00:10:10.870 "strip_size_kb": 64, 00:10:10.870 "state": "configuring", 00:10:10.870 "raid_level": "raid0", 00:10:10.870 "superblock": true, 00:10:10.870 "num_base_bdevs": 2, 00:10:10.870 "num_base_bdevs_discovered": 1, 00:10:10.870 "num_base_bdevs_operational": 2, 00:10:10.870 "base_bdevs_list": [ 00:10:10.870 { 00:10:10.870 "name": "BaseBdev1", 00:10:10.871 "uuid": "c93fc7a3-9c2b-490e-86c7-19dd71a3a315", 00:10:10.871 "is_configured": true, 00:10:10.871 "data_offset": 2048, 00:10:10.871 "data_size": 63488 00:10:10.871 }, 00:10:10.871 { 00:10:10.871 "name": "BaseBdev2", 00:10:10.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.871 "is_configured": false, 00:10:10.871 "data_offset": 0, 00:10:10.871 "data_size": 0 00:10:10.871 } 00:10:10.871 ] 00:10:10.871 }' 00:10:10.871 06:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.871 06:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.440 06:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:11.440 06:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.440 06:19:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:11.440 [2024-11-26 06:19:55.269673] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:11.440 [2024-11-26 06:19:55.269875] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:11.440 06:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.440 06:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:11.440 06:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.440 06:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.440 [2024-11-26 06:19:55.281761] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:11.440 [2024-11-26 06:19:55.284347] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:11.440 [2024-11-26 06:19:55.284447] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:11.440 06:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.440 06:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:11.440 06:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:11.440 06:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:10:11.440 06:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.440 06:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:11.440 06:19:55 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:11.440 06:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:11.440 06:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:11.440 06:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.440 06:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.440 06:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.440 06:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.440 06:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.440 06:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.440 06:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.440 06:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.440 06:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.440 06:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.440 "name": "Existed_Raid", 00:10:11.440 "uuid": "765573eb-50c0-48b0-a7a8-7776b04d0f8b", 00:10:11.440 "strip_size_kb": 64, 00:10:11.440 "state": "configuring", 00:10:11.440 "raid_level": "raid0", 00:10:11.440 "superblock": true, 00:10:11.440 "num_base_bdevs": 2, 00:10:11.440 "num_base_bdevs_discovered": 1, 00:10:11.440 "num_base_bdevs_operational": 2, 00:10:11.440 "base_bdevs_list": [ 00:10:11.440 { 00:10:11.440 "name": "BaseBdev1", 00:10:11.440 "uuid": "c93fc7a3-9c2b-490e-86c7-19dd71a3a315", 00:10:11.440 "is_configured": true, 00:10:11.440 "data_offset": 2048, 
00:10:11.440 "data_size": 63488 00:10:11.440 }, 00:10:11.440 { 00:10:11.440 "name": "BaseBdev2", 00:10:11.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.440 "is_configured": false, 00:10:11.440 "data_offset": 0, 00:10:11.440 "data_size": 0 00:10:11.440 } 00:10:11.440 ] 00:10:11.440 }' 00:10:11.440 06:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.440 06:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.700 06:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:11.700 06:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.700 06:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.700 [2024-11-26 06:19:55.794948] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:11.700 [2024-11-26 06:19:55.795337] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:11.700 [2024-11-26 06:19:55.795355] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:11.700 BaseBdev2 00:10:11.700 [2024-11-26 06:19:55.795701] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:11.700 [2024-11-26 06:19:55.795892] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:11.700 [2024-11-26 06:19:55.795907] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:11.700 [2024-11-26 06:19:55.796077] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:11.700 06:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.700 06:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:10:11.700 06:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:11.700 06:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:11.700 06:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:11.700 06:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:11.700 06:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:11.700 06:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:11.700 06:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.700 06:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.700 06:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.700 06:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:11.700 06:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.700 06:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.700 [ 00:10:11.700 { 00:10:11.700 "name": "BaseBdev2", 00:10:11.700 "aliases": [ 00:10:11.700 "ce06f982-8d49-4582-9e08-30f18be1a98c" 00:10:11.700 ], 00:10:11.700 "product_name": "Malloc disk", 00:10:11.700 "block_size": 512, 00:10:11.700 "num_blocks": 65536, 00:10:11.700 "uuid": "ce06f982-8d49-4582-9e08-30f18be1a98c", 00:10:11.700 "assigned_rate_limits": { 00:10:11.700 "rw_ios_per_sec": 0, 00:10:11.700 "rw_mbytes_per_sec": 0, 00:10:11.700 "r_mbytes_per_sec": 0, 00:10:11.700 "w_mbytes_per_sec": 0 00:10:11.700 }, 00:10:11.700 "claimed": true, 00:10:11.700 "claim_type": 
"exclusive_write", 00:10:11.700 "zoned": false, 00:10:11.700 "supported_io_types": { 00:10:11.700 "read": true, 00:10:11.700 "write": true, 00:10:11.700 "unmap": true, 00:10:11.700 "flush": true, 00:10:11.700 "reset": true, 00:10:11.700 "nvme_admin": false, 00:10:11.700 "nvme_io": false, 00:10:11.700 "nvme_io_md": false, 00:10:11.700 "write_zeroes": true, 00:10:11.700 "zcopy": true, 00:10:11.700 "get_zone_info": false, 00:10:11.700 "zone_management": false, 00:10:11.700 "zone_append": false, 00:10:11.959 "compare": false, 00:10:11.959 "compare_and_write": false, 00:10:11.959 "abort": true, 00:10:11.959 "seek_hole": false, 00:10:11.959 "seek_data": false, 00:10:11.959 "copy": true, 00:10:11.959 "nvme_iov_md": false 00:10:11.959 }, 00:10:11.959 "memory_domains": [ 00:10:11.959 { 00:10:11.959 "dma_device_id": "system", 00:10:11.959 "dma_device_type": 1 00:10:11.959 }, 00:10:11.959 { 00:10:11.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.959 "dma_device_type": 2 00:10:11.959 } 00:10:11.959 ], 00:10:11.959 "driver_specific": {} 00:10:11.959 } 00:10:11.959 ] 00:10:11.959 06:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.959 06:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:11.959 06:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:11.959 06:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:11.959 06:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:10:11.959 06:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.959 06:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:11.959 06:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:10:11.959 06:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:11.959 06:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:11.959 06:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.959 06:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.959 06:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.959 06:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.959 06:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.959 06:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.959 06:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.959 06:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.959 06:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.959 06:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.959 "name": "Existed_Raid", 00:10:11.959 "uuid": "765573eb-50c0-48b0-a7a8-7776b04d0f8b", 00:10:11.959 "strip_size_kb": 64, 00:10:11.959 "state": "online", 00:10:11.959 "raid_level": "raid0", 00:10:11.959 "superblock": true, 00:10:11.959 "num_base_bdevs": 2, 00:10:11.959 "num_base_bdevs_discovered": 2, 00:10:11.959 "num_base_bdevs_operational": 2, 00:10:11.959 "base_bdevs_list": [ 00:10:11.959 { 00:10:11.959 "name": "BaseBdev1", 00:10:11.959 "uuid": "c93fc7a3-9c2b-490e-86c7-19dd71a3a315", 00:10:11.959 "is_configured": true, 00:10:11.960 "data_offset": 2048, 00:10:11.960 "data_size": 63488 
00:10:11.960 }, 00:10:11.960 { 00:10:11.960 "name": "BaseBdev2", 00:10:11.960 "uuid": "ce06f982-8d49-4582-9e08-30f18be1a98c", 00:10:11.960 "is_configured": true, 00:10:11.960 "data_offset": 2048, 00:10:11.960 "data_size": 63488 00:10:11.960 } 00:10:11.960 ] 00:10:11.960 }' 00:10:11.960 06:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.960 06:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.219 06:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:12.219 06:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:12.219 06:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:12.219 06:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:12.219 06:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:12.219 06:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:12.219 06:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:12.219 06:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:12.219 06:19:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.219 06:19:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.219 [2024-11-26 06:19:56.310597] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:12.219 06:19:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.219 06:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:12.219 "name": 
"Existed_Raid", 00:10:12.219 "aliases": [ 00:10:12.219 "765573eb-50c0-48b0-a7a8-7776b04d0f8b" 00:10:12.219 ], 00:10:12.219 "product_name": "Raid Volume", 00:10:12.219 "block_size": 512, 00:10:12.219 "num_blocks": 126976, 00:10:12.219 "uuid": "765573eb-50c0-48b0-a7a8-7776b04d0f8b", 00:10:12.219 "assigned_rate_limits": { 00:10:12.219 "rw_ios_per_sec": 0, 00:10:12.219 "rw_mbytes_per_sec": 0, 00:10:12.219 "r_mbytes_per_sec": 0, 00:10:12.219 "w_mbytes_per_sec": 0 00:10:12.219 }, 00:10:12.219 "claimed": false, 00:10:12.219 "zoned": false, 00:10:12.219 "supported_io_types": { 00:10:12.219 "read": true, 00:10:12.219 "write": true, 00:10:12.219 "unmap": true, 00:10:12.219 "flush": true, 00:10:12.219 "reset": true, 00:10:12.219 "nvme_admin": false, 00:10:12.219 "nvme_io": false, 00:10:12.219 "nvme_io_md": false, 00:10:12.219 "write_zeroes": true, 00:10:12.219 "zcopy": false, 00:10:12.219 "get_zone_info": false, 00:10:12.219 "zone_management": false, 00:10:12.219 "zone_append": false, 00:10:12.219 "compare": false, 00:10:12.219 "compare_and_write": false, 00:10:12.219 "abort": false, 00:10:12.219 "seek_hole": false, 00:10:12.219 "seek_data": false, 00:10:12.219 "copy": false, 00:10:12.219 "nvme_iov_md": false 00:10:12.219 }, 00:10:12.219 "memory_domains": [ 00:10:12.219 { 00:10:12.219 "dma_device_id": "system", 00:10:12.219 "dma_device_type": 1 00:10:12.219 }, 00:10:12.219 { 00:10:12.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.219 "dma_device_type": 2 00:10:12.219 }, 00:10:12.219 { 00:10:12.219 "dma_device_id": "system", 00:10:12.219 "dma_device_type": 1 00:10:12.219 }, 00:10:12.219 { 00:10:12.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.219 "dma_device_type": 2 00:10:12.219 } 00:10:12.219 ], 00:10:12.219 "driver_specific": { 00:10:12.219 "raid": { 00:10:12.219 "uuid": "765573eb-50c0-48b0-a7a8-7776b04d0f8b", 00:10:12.219 "strip_size_kb": 64, 00:10:12.219 "state": "online", 00:10:12.219 "raid_level": "raid0", 00:10:12.219 "superblock": true, 00:10:12.219 
"num_base_bdevs": 2, 00:10:12.219 "num_base_bdevs_discovered": 2, 00:10:12.219 "num_base_bdevs_operational": 2, 00:10:12.219 "base_bdevs_list": [ 00:10:12.219 { 00:10:12.219 "name": "BaseBdev1", 00:10:12.219 "uuid": "c93fc7a3-9c2b-490e-86c7-19dd71a3a315", 00:10:12.219 "is_configured": true, 00:10:12.219 "data_offset": 2048, 00:10:12.219 "data_size": 63488 00:10:12.219 }, 00:10:12.219 { 00:10:12.219 "name": "BaseBdev2", 00:10:12.219 "uuid": "ce06f982-8d49-4582-9e08-30f18be1a98c", 00:10:12.219 "is_configured": true, 00:10:12.219 "data_offset": 2048, 00:10:12.219 "data_size": 63488 00:10:12.219 } 00:10:12.219 ] 00:10:12.219 } 00:10:12.219 } 00:10:12.219 }' 00:10:12.219 06:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:12.480 06:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:12.480 BaseBdev2' 00:10:12.480 06:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:12.480 06:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:12.480 06:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:12.480 06:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:12.480 06:19:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.480 06:19:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.480 06:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:12.480 06:19:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:12.480 06:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:12.480 06:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:12.480 06:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:12.480 06:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:12.480 06:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:12.480 06:19:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.480 06:19:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.480 06:19:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.480 06:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:12.480 06:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:12.480 06:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:12.480 06:19:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.480 06:19:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.480 [2024-11-26 06:19:56.538018] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:12.480 [2024-11-26 06:19:56.538204] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:12.480 [2024-11-26 06:19:56.538309] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:12.739 06:19:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:10:12.739 06:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:12.739 06:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:12.739 06:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:12.740 06:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:12.740 06:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:12.740 06:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:10:12.740 06:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.740 06:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:12.740 06:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:12.740 06:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:12.740 06:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:12.740 06:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.740 06:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.740 06:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.740 06:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.740 06:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.740 06:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.740 06:19:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.740 06:19:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.740 06:19:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.740 06:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.740 "name": "Existed_Raid", 00:10:12.740 "uuid": "765573eb-50c0-48b0-a7a8-7776b04d0f8b", 00:10:12.740 "strip_size_kb": 64, 00:10:12.740 "state": "offline", 00:10:12.740 "raid_level": "raid0", 00:10:12.740 "superblock": true, 00:10:12.740 "num_base_bdevs": 2, 00:10:12.740 "num_base_bdevs_discovered": 1, 00:10:12.740 "num_base_bdevs_operational": 1, 00:10:12.740 "base_bdevs_list": [ 00:10:12.740 { 00:10:12.740 "name": null, 00:10:12.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.740 "is_configured": false, 00:10:12.740 "data_offset": 0, 00:10:12.740 "data_size": 63488 00:10:12.740 }, 00:10:12.740 { 00:10:12.740 "name": "BaseBdev2", 00:10:12.740 "uuid": "ce06f982-8d49-4582-9e08-30f18be1a98c", 00:10:12.740 "is_configured": true, 00:10:12.740 "data_offset": 2048, 00:10:12.740 "data_size": 63488 00:10:12.740 } 00:10:12.740 ] 00:10:12.740 }' 00:10:12.740 06:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.740 06:19:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.000 06:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:13.000 06:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:13.000 06:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.000 06:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:13.000 06:19:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.000 06:19:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.260 06:19:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.260 06:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:13.260 06:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:13.260 06:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:13.260 06:19:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.260 06:19:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.260 [2024-11-26 06:19:57.175304] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:13.260 [2024-11-26 06:19:57.175506] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:13.260 06:19:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.260 06:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:13.260 06:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:13.260 06:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.260 06:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:13.260 06:19:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.260 06:19:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.260 06:19:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.260 06:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:13.260 06:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:13.260 06:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:10:13.260 06:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61306 00:10:13.260 06:19:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 61306 ']' 00:10:13.260 06:19:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 61306 00:10:13.260 06:19:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:13.260 06:19:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:13.260 06:19:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61306 00:10:13.260 killing process with pid 61306 00:10:13.260 06:19:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:13.260 06:19:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:13.260 06:19:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61306' 00:10:13.260 06:19:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 61306 00:10:13.260 [2024-11-26 06:19:57.390924] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:13.260 06:19:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 61306 00:10:13.549 [2024-11-26 06:19:57.410361] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:14.925 06:19:58 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@328 -- # return 0 00:10:14.925 00:10:14.925 real 0m5.386s 00:10:14.925 user 0m7.545s 00:10:14.925 sys 0m0.967s 00:10:14.925 06:19:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:14.925 ************************************ 00:10:14.925 END TEST raid_state_function_test_sb 00:10:14.925 ************************************ 00:10:14.925 06:19:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.925 06:19:58 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:10:14.925 06:19:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:14.925 06:19:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:14.925 06:19:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:14.925 ************************************ 00:10:14.925 START TEST raid_superblock_test 00:10:14.925 ************************************ 00:10:14.925 06:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:10:14.925 06:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:10:14.925 06:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:10:14.925 06:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:14.925 06:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:14.925 06:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:14.925 06:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:14.925 06:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:14.925 06:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:14.925 06:19:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:14.925 06:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:14.925 06:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:14.925 06:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:14.925 06:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:14.925 06:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:10:14.925 06:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:14.925 06:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:14.925 06:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61558 00:10:14.925 06:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:14.925 06:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61558 00:10:14.925 06:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61558 ']' 00:10:14.925 06:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:14.925 06:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:14.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:14.925 06:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:14.925 06:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:14.925 06:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.925 [2024-11-26 06:19:58.894090] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:10:14.925 [2024-11-26 06:19:58.894740] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61558 ] 00:10:14.925 [2024-11-26 06:19:59.050907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.185 [2024-11-26 06:19:59.201705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.444 [2024-11-26 06:19:59.453496] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:15.444 [2024-11-26 06:19:59.453581] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:16.012 06:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:16.012 06:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:16.012 06:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:16.012 06:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:16.012 06:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:16.012 06:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:16.012 06:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:16.012 06:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:16.012 06:19:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:16.012 06:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:16.012 06:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:16.012 06:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.012 06:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.012 malloc1 00:10:16.012 06:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.012 06:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:16.012 06:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.012 06:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.012 [2024-11-26 06:19:59.900557] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:16.012 [2024-11-26 06:19:59.900765] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:16.012 [2024-11-26 06:19:59.900840] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:16.012 [2024-11-26 06:19:59.900879] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:16.012 [2024-11-26 06:19:59.903567] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:16.012 [2024-11-26 06:19:59.903667] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:16.012 pt1 00:10:16.012 06:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.012 06:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:16.012 06:19:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:16.012 06:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:16.012 06:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:16.012 06:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:16.012 06:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:16.012 06:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:16.012 06:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:16.012 06:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:16.012 06:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.012 06:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.012 malloc2 00:10:16.012 06:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.012 06:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:16.012 06:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.012 06:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.012 [2024-11-26 06:19:59.969565] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:16.012 [2024-11-26 06:19:59.969741] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:16.013 [2024-11-26 06:19:59.969795] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:16.013 
[2024-11-26 06:19:59.969836] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:16.013 [2024-11-26 06:19:59.972635] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:16.013 [2024-11-26 06:19:59.972714] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:16.013 pt2 00:10:16.013 06:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.013 06:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:16.013 06:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:16.013 06:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:10:16.013 06:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.013 06:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.013 [2024-11-26 06:19:59.981648] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:16.013 [2024-11-26 06:19:59.984040] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:16.013 [2024-11-26 06:19:59.984351] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:16.013 [2024-11-26 06:19:59.984410] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:16.013 [2024-11-26 06:19:59.984809] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:16.013 [2024-11-26 06:19:59.985067] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:16.013 [2024-11-26 06:19:59.985118] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:16.013 [2024-11-26 06:19:59.985379] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:16.013 06:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.013 06:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:16.013 06:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:16.013 06:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:16.013 06:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:16.013 06:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.013 06:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:16.013 06:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.013 06:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.013 06:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.013 06:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.013 06:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.013 06:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:16.013 06:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.013 06:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.013 06:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.013 06:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.013 "name": "raid_bdev1", 00:10:16.013 "uuid": 
"7262ac20-02a1-42be-b9c1-eb1b5ab20cb7", 00:10:16.013 "strip_size_kb": 64, 00:10:16.013 "state": "online", 00:10:16.013 "raid_level": "raid0", 00:10:16.013 "superblock": true, 00:10:16.013 "num_base_bdevs": 2, 00:10:16.013 "num_base_bdevs_discovered": 2, 00:10:16.013 "num_base_bdevs_operational": 2, 00:10:16.013 "base_bdevs_list": [ 00:10:16.013 { 00:10:16.013 "name": "pt1", 00:10:16.013 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:16.013 "is_configured": true, 00:10:16.013 "data_offset": 2048, 00:10:16.013 "data_size": 63488 00:10:16.013 }, 00:10:16.013 { 00:10:16.013 "name": "pt2", 00:10:16.013 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:16.013 "is_configured": true, 00:10:16.013 "data_offset": 2048, 00:10:16.013 "data_size": 63488 00:10:16.013 } 00:10:16.013 ] 00:10:16.013 }' 00:10:16.013 06:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.013 06:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.580 06:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:16.580 06:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:16.580 06:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:16.580 06:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:16.580 06:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:16.580 06:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:16.580 06:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:16.581 06:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:16.581 06:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.581 06:20:00 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.581 [2024-11-26 06:20:00.461179] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:16.581 06:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.581 06:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:16.581 "name": "raid_bdev1", 00:10:16.581 "aliases": [ 00:10:16.581 "7262ac20-02a1-42be-b9c1-eb1b5ab20cb7" 00:10:16.581 ], 00:10:16.581 "product_name": "Raid Volume", 00:10:16.581 "block_size": 512, 00:10:16.581 "num_blocks": 126976, 00:10:16.581 "uuid": "7262ac20-02a1-42be-b9c1-eb1b5ab20cb7", 00:10:16.581 "assigned_rate_limits": { 00:10:16.581 "rw_ios_per_sec": 0, 00:10:16.581 "rw_mbytes_per_sec": 0, 00:10:16.581 "r_mbytes_per_sec": 0, 00:10:16.581 "w_mbytes_per_sec": 0 00:10:16.581 }, 00:10:16.581 "claimed": false, 00:10:16.581 "zoned": false, 00:10:16.581 "supported_io_types": { 00:10:16.581 "read": true, 00:10:16.581 "write": true, 00:10:16.581 "unmap": true, 00:10:16.581 "flush": true, 00:10:16.581 "reset": true, 00:10:16.581 "nvme_admin": false, 00:10:16.581 "nvme_io": false, 00:10:16.581 "nvme_io_md": false, 00:10:16.581 "write_zeroes": true, 00:10:16.581 "zcopy": false, 00:10:16.581 "get_zone_info": false, 00:10:16.581 "zone_management": false, 00:10:16.581 "zone_append": false, 00:10:16.581 "compare": false, 00:10:16.581 "compare_and_write": false, 00:10:16.581 "abort": false, 00:10:16.581 "seek_hole": false, 00:10:16.581 "seek_data": false, 00:10:16.581 "copy": false, 00:10:16.581 "nvme_iov_md": false 00:10:16.581 }, 00:10:16.581 "memory_domains": [ 00:10:16.581 { 00:10:16.581 "dma_device_id": "system", 00:10:16.581 "dma_device_type": 1 00:10:16.581 }, 00:10:16.581 { 00:10:16.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.581 "dma_device_type": 2 00:10:16.581 }, 00:10:16.581 { 00:10:16.581 "dma_device_id": "system", 00:10:16.581 "dma_device_type": 
1 00:10:16.581 }, 00:10:16.581 { 00:10:16.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.581 "dma_device_type": 2 00:10:16.581 } 00:10:16.581 ], 00:10:16.581 "driver_specific": { 00:10:16.581 "raid": { 00:10:16.581 "uuid": "7262ac20-02a1-42be-b9c1-eb1b5ab20cb7", 00:10:16.581 "strip_size_kb": 64, 00:10:16.581 "state": "online", 00:10:16.581 "raid_level": "raid0", 00:10:16.581 "superblock": true, 00:10:16.581 "num_base_bdevs": 2, 00:10:16.581 "num_base_bdevs_discovered": 2, 00:10:16.581 "num_base_bdevs_operational": 2, 00:10:16.581 "base_bdevs_list": [ 00:10:16.581 { 00:10:16.581 "name": "pt1", 00:10:16.581 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:16.581 "is_configured": true, 00:10:16.581 "data_offset": 2048, 00:10:16.581 "data_size": 63488 00:10:16.581 }, 00:10:16.581 { 00:10:16.581 "name": "pt2", 00:10:16.581 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:16.581 "is_configured": true, 00:10:16.581 "data_offset": 2048, 00:10:16.581 "data_size": 63488 00:10:16.581 } 00:10:16.581 ] 00:10:16.581 } 00:10:16.581 } 00:10:16.581 }' 00:10:16.581 06:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:16.581 06:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:16.581 pt2' 00:10:16.581 06:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.581 06:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:16.581 06:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:16.581 06:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:16.581 06:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" 
")' 00:10:16.581 06:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.581 06:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.581 06:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.581 06:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:16.581 06:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:16.581 06:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:16.581 06:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:16.581 06:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.581 06:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.581 06:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.581 06:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.581 06:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:16.581 06:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:16.581 06:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:16.581 06:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.581 06:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.581 06:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:16.581 [2024-11-26 06:20:00.680658] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:16.581 06:20:00 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.841 06:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=7262ac20-02a1-42be-b9c1-eb1b5ab20cb7 00:10:16.841 06:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 7262ac20-02a1-42be-b9c1-eb1b5ab20cb7 ']' 00:10:16.841 06:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:16.841 06:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.841 06:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.841 [2024-11-26 06:20:00.732307] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:16.841 [2024-11-26 06:20:00.732414] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:16.841 [2024-11-26 06:20:00.732577] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:16.841 [2024-11-26 06:20:00.732677] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:16.841 [2024-11-26 06:20:00.732730] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:16.841 06:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.841 06:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.841 06:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.841 06:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.841 06:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:16.841 06:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.841 06:20:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:16.841 06:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:16.841 06:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:16.841 06:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:16.841 06:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.841 06:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.841 06:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.841 06:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:16.841 06:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:16.841 06:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.841 06:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.841 06:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.841 06:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:16.841 06:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.841 06:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.841 06:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:16.841 06:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.841 06:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:16.841 06:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd 
bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:10:16.841 06:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:16.842 06:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:10:16.842 06:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:16.842 06:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:16.842 06:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:16.842 06:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:16.842 06:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:10:16.842 06:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.842 06:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.842 [2024-11-26 06:20:00.868153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:16.842 [2024-11-26 06:20:00.870588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:16.842 [2024-11-26 06:20:00.870735] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:16.842 [2024-11-26 06:20:00.870852] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:16.842 [2024-11-26 06:20:00.870998] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:16.842 [2024-11-26 06:20:00.871060] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:16.842 request: 00:10:16.842 { 00:10:16.842 "name": "raid_bdev1", 00:10:16.842 "raid_level": "raid0", 00:10:16.842 "base_bdevs": [ 00:10:16.842 "malloc1", 00:10:16.842 "malloc2" 00:10:16.842 ], 00:10:16.842 "strip_size_kb": 64, 00:10:16.842 "superblock": false, 00:10:16.842 "method": "bdev_raid_create", 00:10:16.842 "req_id": 1 00:10:16.842 } 00:10:16.842 Got JSON-RPC error response 00:10:16.842 response: 00:10:16.842 { 00:10:16.842 "code": -17, 00:10:16.842 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:16.842 } 00:10:16.842 06:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:16.842 06:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:16.842 06:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:16.842 06:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:16.842 06:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:16.842 06:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.842 06:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:16.842 06:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.842 06:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.842 06:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.842 06:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:16.842 06:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:16.842 06:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:10:16.842 06:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.842 06:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.842 [2024-11-26 06:20:00.935999] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:16.842 [2024-11-26 06:20:00.936223] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:16.842 [2024-11-26 06:20:00.936276] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:16.842 [2024-11-26 06:20:00.936378] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:16.842 [2024-11-26 06:20:00.939333] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:16.842 [2024-11-26 06:20:00.939433] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:16.842 [2024-11-26 06:20:00.939613] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:16.842 [2024-11-26 06:20:00.939767] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:16.842 pt1 00:10:16.842 06:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.842 06:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:10:16.842 06:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:16.842 06:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:16.842 06:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:16.842 06:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.842 06:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:10:16.842 06:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.842 06:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.842 06:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.842 06:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.842 06:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.842 06:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.842 06:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.842 06:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:16.842 06:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.104 06:20:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.104 "name": "raid_bdev1", 00:10:17.104 "uuid": "7262ac20-02a1-42be-b9c1-eb1b5ab20cb7", 00:10:17.104 "strip_size_kb": 64, 00:10:17.104 "state": "configuring", 00:10:17.104 "raid_level": "raid0", 00:10:17.104 "superblock": true, 00:10:17.104 "num_base_bdevs": 2, 00:10:17.104 "num_base_bdevs_discovered": 1, 00:10:17.104 "num_base_bdevs_operational": 2, 00:10:17.104 "base_bdevs_list": [ 00:10:17.104 { 00:10:17.104 "name": "pt1", 00:10:17.104 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:17.104 "is_configured": true, 00:10:17.104 "data_offset": 2048, 00:10:17.104 "data_size": 63488 00:10:17.104 }, 00:10:17.104 { 00:10:17.104 "name": null, 00:10:17.104 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:17.104 "is_configured": false, 00:10:17.104 "data_offset": 2048, 00:10:17.104 "data_size": 63488 00:10:17.104 } 00:10:17.104 ] 00:10:17.104 }' 00:10:17.104 06:20:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.104 06:20:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.362 06:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:10:17.362 06:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:17.362 06:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:17.362 06:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:17.362 06:20:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.362 06:20:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.362 [2024-11-26 06:20:01.395440] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:17.362 [2024-11-26 06:20:01.395639] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:17.362 [2024-11-26 06:20:01.395689] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:10:17.362 [2024-11-26 06:20:01.395757] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:17.362 [2024-11-26 06:20:01.396431] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:17.362 [2024-11-26 06:20:01.396498] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:17.362 [2024-11-26 06:20:01.396648] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:17.362 [2024-11-26 06:20:01.396707] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:17.362 [2024-11-26 06:20:01.396892] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:17.362 [2024-11-26 06:20:01.396937] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:17.362 [2024-11-26 06:20:01.397261] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:17.362 [2024-11-26 06:20:01.397482] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:17.362 [2024-11-26 06:20:01.397526] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:17.362 [2024-11-26 06:20:01.397758] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:17.362 pt2 00:10:17.362 06:20:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.362 06:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:17.362 06:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:17.362 06:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:17.362 06:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:17.362 06:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:17.362 06:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:17.362 06:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:17.362 06:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:17.362 06:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.362 06:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.362 06:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.362 06:20:01 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.362 06:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.362 06:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:17.362 06:20:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.362 06:20:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.362 06:20:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.362 06:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.362 "name": "raid_bdev1", 00:10:17.362 "uuid": "7262ac20-02a1-42be-b9c1-eb1b5ab20cb7", 00:10:17.362 "strip_size_kb": 64, 00:10:17.362 "state": "online", 00:10:17.362 "raid_level": "raid0", 00:10:17.362 "superblock": true, 00:10:17.362 "num_base_bdevs": 2, 00:10:17.362 "num_base_bdevs_discovered": 2, 00:10:17.362 "num_base_bdevs_operational": 2, 00:10:17.362 "base_bdevs_list": [ 00:10:17.362 { 00:10:17.362 "name": "pt1", 00:10:17.362 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:17.362 "is_configured": true, 00:10:17.362 "data_offset": 2048, 00:10:17.362 "data_size": 63488 00:10:17.362 }, 00:10:17.362 { 00:10:17.362 "name": "pt2", 00:10:17.362 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:17.362 "is_configured": true, 00:10:17.362 "data_offset": 2048, 00:10:17.362 "data_size": 63488 00:10:17.362 } 00:10:17.362 ] 00:10:17.362 }' 00:10:17.362 06:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.362 06:20:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.931 06:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:17.931 06:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:17.931 
06:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:17.931 06:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:17.931 06:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:17.931 06:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:17.931 06:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:17.931 06:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:17.931 06:20:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.931 06:20:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.931 [2024-11-26 06:20:01.890922] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:17.931 06:20:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.931 06:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:17.931 "name": "raid_bdev1", 00:10:17.931 "aliases": [ 00:10:17.931 "7262ac20-02a1-42be-b9c1-eb1b5ab20cb7" 00:10:17.931 ], 00:10:17.931 "product_name": "Raid Volume", 00:10:17.931 "block_size": 512, 00:10:17.931 "num_blocks": 126976, 00:10:17.931 "uuid": "7262ac20-02a1-42be-b9c1-eb1b5ab20cb7", 00:10:17.931 "assigned_rate_limits": { 00:10:17.931 "rw_ios_per_sec": 0, 00:10:17.931 "rw_mbytes_per_sec": 0, 00:10:17.931 "r_mbytes_per_sec": 0, 00:10:17.931 "w_mbytes_per_sec": 0 00:10:17.931 }, 00:10:17.931 "claimed": false, 00:10:17.931 "zoned": false, 00:10:17.931 "supported_io_types": { 00:10:17.931 "read": true, 00:10:17.931 "write": true, 00:10:17.931 "unmap": true, 00:10:17.931 "flush": true, 00:10:17.931 "reset": true, 00:10:17.931 "nvme_admin": false, 00:10:17.931 "nvme_io": false, 00:10:17.931 "nvme_io_md": false, 00:10:17.931 
"write_zeroes": true, 00:10:17.931 "zcopy": false, 00:10:17.931 "get_zone_info": false, 00:10:17.931 "zone_management": false, 00:10:17.931 "zone_append": false, 00:10:17.931 "compare": false, 00:10:17.931 "compare_and_write": false, 00:10:17.931 "abort": false, 00:10:17.931 "seek_hole": false, 00:10:17.931 "seek_data": false, 00:10:17.931 "copy": false, 00:10:17.931 "nvme_iov_md": false 00:10:17.931 }, 00:10:17.931 "memory_domains": [ 00:10:17.931 { 00:10:17.931 "dma_device_id": "system", 00:10:17.931 "dma_device_type": 1 00:10:17.931 }, 00:10:17.931 { 00:10:17.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.931 "dma_device_type": 2 00:10:17.931 }, 00:10:17.931 { 00:10:17.931 "dma_device_id": "system", 00:10:17.931 "dma_device_type": 1 00:10:17.931 }, 00:10:17.931 { 00:10:17.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.931 "dma_device_type": 2 00:10:17.931 } 00:10:17.931 ], 00:10:17.931 "driver_specific": { 00:10:17.931 "raid": { 00:10:17.931 "uuid": "7262ac20-02a1-42be-b9c1-eb1b5ab20cb7", 00:10:17.931 "strip_size_kb": 64, 00:10:17.931 "state": "online", 00:10:17.931 "raid_level": "raid0", 00:10:17.931 "superblock": true, 00:10:17.931 "num_base_bdevs": 2, 00:10:17.931 "num_base_bdevs_discovered": 2, 00:10:17.931 "num_base_bdevs_operational": 2, 00:10:17.931 "base_bdevs_list": [ 00:10:17.931 { 00:10:17.931 "name": "pt1", 00:10:17.931 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:17.931 "is_configured": true, 00:10:17.931 "data_offset": 2048, 00:10:17.931 "data_size": 63488 00:10:17.931 }, 00:10:17.931 { 00:10:17.931 "name": "pt2", 00:10:17.931 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:17.931 "is_configured": true, 00:10:17.931 "data_offset": 2048, 00:10:17.931 "data_size": 63488 00:10:17.931 } 00:10:17.931 ] 00:10:17.931 } 00:10:17.931 } 00:10:17.931 }' 00:10:17.931 06:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:10:17.931 06:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:17.931 pt2' 00:10:17.931 06:20:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.931 06:20:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:17.931 06:20:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.931 06:20:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.931 06:20:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:17.931 06:20:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.931 06:20:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.931 06:20:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.931 06:20:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:17.931 06:20:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:17.931 06:20:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:18.190 06:20:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:18.191 06:20:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.191 06:20:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.191 06:20:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:18.191 06:20:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.191 06:20:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:18.191 06:20:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:18.191 06:20:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:18.191 06:20:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:18.191 06:20:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.191 06:20:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.191 [2024-11-26 06:20:02.130607] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:18.191 06:20:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.191 06:20:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 7262ac20-02a1-42be-b9c1-eb1b5ab20cb7 '!=' 7262ac20-02a1-42be-b9c1-eb1b5ab20cb7 ']' 00:10:18.191 06:20:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:10:18.191 06:20:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:18.191 06:20:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:18.191 06:20:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61558 00:10:18.191 06:20:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 61558 ']' 00:10:18.191 06:20:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 61558 00:10:18.191 06:20:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:18.191 06:20:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:18.191 06:20:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61558 00:10:18.191 06:20:02 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:18.191 06:20:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:18.191 06:20:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61558' 00:10:18.191 killing process with pid 61558 00:10:18.191 06:20:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 61558 00:10:18.191 [2024-11-26 06:20:02.218952] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:18.191 06:20:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 61558 00:10:18.191 [2024-11-26 06:20:02.219255] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:18.191 [2024-11-26 06:20:02.219401] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:18.191 [2024-11-26 06:20:02.219476] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:18.449 [2024-11-26 06:20:02.451913] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:19.827 06:20:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:19.827 00:10:19.827 real 0m4.954s 00:10:19.827 user 0m6.857s 00:10:19.827 sys 0m0.878s 00:10:19.827 06:20:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:19.827 06:20:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.827 ************************************ 00:10:19.827 END TEST raid_superblock_test 00:10:19.827 ************************************ 00:10:19.827 06:20:03 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:10:19.827 06:20:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:19.827 06:20:03 bdev_raid -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:10:19.827 06:20:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:19.827 ************************************ 00:10:19.827 START TEST raid_read_error_test 00:10:19.827 ************************************ 00:10:19.827 06:20:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:10:19.827 06:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:19.827 06:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:10:19.827 06:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:19.827 06:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:19.827 06:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:19.827 06:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:19.827 06:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:19.827 06:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:19.827 06:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:19.827 06:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:19.827 06:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:19.827 06:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:19.827 06:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:19.827 06:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:19.827 06:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:19.827 06:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- 
# local create_arg 00:10:19.827 06:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:19.827 06:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:19.827 06:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:19.827 06:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:19.827 06:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:19.827 06:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:19.827 06:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.rf1LwZMuMv 00:10:19.827 06:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61770 00:10:19.827 06:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:19.827 06:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61770 00:10:19.827 06:20:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 61770 ']' 00:10:19.827 06:20:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:19.827 06:20:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:19.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:19.827 06:20:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:19.827 06:20:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:19.827 06:20:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.827 [2024-11-26 06:20:03.929185] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:10:19.827 [2024-11-26 06:20:03.929324] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61770 ] 00:10:20.086 [2024-11-26 06:20:04.111178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.346 [2024-11-26 06:20:04.239816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.346 [2024-11-26 06:20:04.462004] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:20.346 [2024-11-26 06:20:04.462189] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:20.955 06:20:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:20.955 06:20:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:20.955 06:20:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:20.955 06:20:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:20.955 06:20:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.955 06:20:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.955 BaseBdev1_malloc 00:10:20.955 06:20:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.955 06:20:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:10:20.955 06:20:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.955 06:20:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.955 true 00:10:20.955 06:20:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.955 06:20:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:20.955 06:20:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.955 06:20:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.955 [2024-11-26 06:20:04.860619] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:20.955 [2024-11-26 06:20:04.860830] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.955 [2024-11-26 06:20:04.860907] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:20.955 [2024-11-26 06:20:04.860959] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.955 [2024-11-26 06:20:04.863600] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.955 [2024-11-26 06:20:04.863762] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:20.955 BaseBdev1 00:10:20.955 06:20:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.955 06:20:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:20.955 06:20:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:20.955 06:20:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.955 06:20:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:10:20.955 BaseBdev2_malloc 00:10:20.955 06:20:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.955 06:20:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:20.955 06:20:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.955 06:20:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.955 true 00:10:20.955 06:20:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.955 06:20:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:20.955 06:20:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.955 06:20:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.955 [2024-11-26 06:20:04.933744] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:20.955 [2024-11-26 06:20:04.933888] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.955 [2024-11-26 06:20:04.933934] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:20.955 [2024-11-26 06:20:04.933974] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.955 [2024-11-26 06:20:04.936498] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.955 [2024-11-26 06:20:04.936599] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:20.955 BaseBdev2 00:10:20.955 06:20:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.955 06:20:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:10:20.955 06:20:04 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.955 06:20:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.955 [2024-11-26 06:20:04.945815] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:20.955 [2024-11-26 06:20:04.948194] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:20.955 [2024-11-26 06:20:04.948568] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:20.955 [2024-11-26 06:20:04.948639] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:20.955 [2024-11-26 06:20:04.949049] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:20.955 [2024-11-26 06:20:04.949412] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:20.955 [2024-11-26 06:20:04.949473] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:20.955 [2024-11-26 06:20:04.949836] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:20.955 06:20:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.955 06:20:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:20.955 06:20:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:20.955 06:20:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:20.955 06:20:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:20.955 06:20:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:20.955 06:20:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:10:20.955 06:20:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.955 06:20:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.955 06:20:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.955 06:20:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.955 06:20:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.955 06:20:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.955 06:20:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.955 06:20:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:20.955 06:20:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.955 06:20:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.955 "name": "raid_bdev1", 00:10:20.955 "uuid": "83975d13-ae47-47ce-8f1a-558d9da47ccc", 00:10:20.955 "strip_size_kb": 64, 00:10:20.955 "state": "online", 00:10:20.955 "raid_level": "raid0", 00:10:20.955 "superblock": true, 00:10:20.955 "num_base_bdevs": 2, 00:10:20.955 "num_base_bdevs_discovered": 2, 00:10:20.955 "num_base_bdevs_operational": 2, 00:10:20.955 "base_bdevs_list": [ 00:10:20.955 { 00:10:20.955 "name": "BaseBdev1", 00:10:20.955 "uuid": "5c0350b9-beaa-51ac-8142-8a33d87b3ea6", 00:10:20.955 "is_configured": true, 00:10:20.955 "data_offset": 2048, 00:10:20.955 "data_size": 63488 00:10:20.955 }, 00:10:20.955 { 00:10:20.955 "name": "BaseBdev2", 00:10:20.955 "uuid": "0d7122ac-b0b9-5a92-9c52-c2e04996b96a", 00:10:20.955 "is_configured": true, 00:10:20.955 "data_offset": 2048, 00:10:20.955 "data_size": 63488 00:10:20.955 } 00:10:20.955 ] 00:10:20.955 }' 00:10:20.955 06:20:05 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.955 06:20:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.524 06:20:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:21.524 06:20:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:21.524 [2024-11-26 06:20:05.518319] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:22.464 06:20:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:22.464 06:20:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.464 06:20:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.464 06:20:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.464 06:20:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:22.464 06:20:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:22.464 06:20:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:10:22.464 06:20:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:22.464 06:20:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:22.464 06:20:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:22.464 06:20:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:22.464 06:20:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:22.464 06:20:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:10:22.464 06:20:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.464 06:20:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.464 06:20:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.464 06:20:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.464 06:20:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.464 06:20:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.464 06:20:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.464 06:20:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:22.464 06:20:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.464 06:20:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.464 "name": "raid_bdev1", 00:10:22.464 "uuid": "83975d13-ae47-47ce-8f1a-558d9da47ccc", 00:10:22.464 "strip_size_kb": 64, 00:10:22.464 "state": "online", 00:10:22.464 "raid_level": "raid0", 00:10:22.464 "superblock": true, 00:10:22.464 "num_base_bdevs": 2, 00:10:22.464 "num_base_bdevs_discovered": 2, 00:10:22.464 "num_base_bdevs_operational": 2, 00:10:22.464 "base_bdevs_list": [ 00:10:22.464 { 00:10:22.464 "name": "BaseBdev1", 00:10:22.464 "uuid": "5c0350b9-beaa-51ac-8142-8a33d87b3ea6", 00:10:22.464 "is_configured": true, 00:10:22.464 "data_offset": 2048, 00:10:22.464 "data_size": 63488 00:10:22.464 }, 00:10:22.464 { 00:10:22.464 "name": "BaseBdev2", 00:10:22.464 "uuid": "0d7122ac-b0b9-5a92-9c52-c2e04996b96a", 00:10:22.464 "is_configured": true, 00:10:22.464 "data_offset": 2048, 00:10:22.464 "data_size": 63488 00:10:22.464 } 00:10:22.464 ] 00:10:22.464 }' 00:10:22.464 06:20:06 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.464 06:20:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.725 06:20:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:22.725 06:20:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.725 06:20:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.725 [2024-11-26 06:20:06.847291] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:22.725 [2024-11-26 06:20:06.847535] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:22.725 [2024-11-26 06:20:06.856325] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:22.725 { 00:10:22.725 "results": [ 00:10:22.725 { 00:10:22.725 "job": "raid_bdev1", 00:10:22.725 "core_mask": "0x1", 00:10:22.725 "workload": "randrw", 00:10:22.725 "percentage": 50, 00:10:22.725 "status": "finished", 00:10:22.725 "queue_depth": 1, 00:10:22.726 "io_size": 131072, 00:10:22.726 "runtime": 1.330116, 00:10:22.726 "iops": 13910.816800940669, 00:10:22.726 "mibps": 1738.8521001175836, 00:10:22.726 "io_failed": 1, 00:10:22.726 "io_timeout": 0, 00:10:22.726 "avg_latency_us": 99.90142671854734, 00:10:22.726 "min_latency_us": 27.053275109170304, 00:10:22.726 "max_latency_us": 1681.3275109170306 00:10:22.726 } 00:10:22.726 ], 00:10:22.726 "core_count": 1 00:10:22.726 } 00:10:22.726 [2024-11-26 06:20:06.856602] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:22.726 [2024-11-26 06:20:06.856694] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:22.726 [2024-11-26 06:20:06.856741] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:22.986 06:20:06 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.986 06:20:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61770 00:10:22.986 06:20:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 61770 ']' 00:10:22.986 06:20:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 61770 00:10:22.986 06:20:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:22.986 06:20:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:22.986 06:20:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61770 00:10:22.986 killing process with pid 61770 00:10:22.986 06:20:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:22.986 06:20:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:22.986 06:20:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61770' 00:10:22.986 06:20:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 61770 00:10:22.986 [2024-11-26 06:20:06.901809] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:22.986 06:20:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 61770 00:10:22.986 [2024-11-26 06:20:07.054623] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:24.368 06:20:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.rf1LwZMuMv 00:10:24.368 06:20:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:24.368 06:20:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:24.368 06:20:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:10:24.368 06:20:08 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:24.368 06:20:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:24.368 06:20:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:24.368 06:20:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:10:24.368 00:10:24.368 real 0m4.539s 00:10:24.368 user 0m5.392s 00:10:24.368 sys 0m0.562s 00:10:24.368 06:20:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:24.368 06:20:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.368 ************************************ 00:10:24.368 END TEST raid_read_error_test 00:10:24.368 ************************************ 00:10:24.368 06:20:08 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:10:24.368 06:20:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:24.368 06:20:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:24.368 06:20:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:24.368 ************************************ 00:10:24.368 START TEST raid_write_error_test 00:10:24.368 ************************************ 00:10:24.368 06:20:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:10:24.368 06:20:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:24.368 06:20:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:10:24.369 06:20:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:24.369 06:20:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:24.369 06:20:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:24.369 06:20:08 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:24.369 06:20:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:24.369 06:20:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:24.369 06:20:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:24.369 06:20:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:24.369 06:20:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:24.369 06:20:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:24.369 06:20:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:24.369 06:20:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:24.369 06:20:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:24.369 06:20:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:24.369 06:20:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:24.369 06:20:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:24.369 06:20:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:24.369 06:20:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:24.369 06:20:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:24.369 06:20:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:24.369 06:20:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.1ohsvSWUeB 00:10:24.369 06:20:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61915 00:10:24.369 06:20:08 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:24.369 06:20:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61915 00:10:24.369 06:20:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 61915 ']' 00:10:24.369 06:20:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:24.369 06:20:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:24.369 06:20:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:24.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:24.369 06:20:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:24.369 06:20:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.630 [2024-11-26 06:20:08.543580] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:10:24.630 [2024-11-26 06:20:08.543841] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61915 ] 00:10:24.630 [2024-11-26 06:20:08.709032] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:24.890 [2024-11-26 06:20:08.851931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.149 [2024-11-26 06:20:09.072580] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:25.149 [2024-11-26 06:20:09.072732] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:25.408 06:20:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:25.408 06:20:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:25.408 06:20:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:25.408 06:20:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:25.408 06:20:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.408 06:20:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.408 BaseBdev1_malloc 00:10:25.408 06:20:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.408 06:20:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:25.408 06:20:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.408 06:20:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.408 true 00:10:25.408 06:20:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:25.408 06:20:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:25.408 06:20:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.408 06:20:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.667 [2024-11-26 06:20:09.540002] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:25.667 [2024-11-26 06:20:09.540229] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:25.667 [2024-11-26 06:20:09.540270] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:25.667 [2024-11-26 06:20:09.540291] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:25.667 [2024-11-26 06:20:09.543116] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:25.667 [2024-11-26 06:20:09.543180] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:25.667 BaseBdev1 00:10:25.667 06:20:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.667 06:20:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:25.667 06:20:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:25.667 06:20:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.667 06:20:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.667 BaseBdev2_malloc 00:10:25.667 06:20:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.667 06:20:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:25.667 06:20:09 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.667 06:20:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.667 true 00:10:25.667 06:20:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.667 06:20:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:25.667 06:20:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.667 06:20:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.667 [2024-11-26 06:20:09.615118] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:25.667 [2024-11-26 06:20:09.615268] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:25.667 [2024-11-26 06:20:09.615316] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:25.667 [2024-11-26 06:20:09.615358] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:25.667 [2024-11-26 06:20:09.618148] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:25.667 [2024-11-26 06:20:09.618245] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:25.667 BaseBdev2 00:10:25.667 06:20:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.667 06:20:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:10:25.667 06:20:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.667 06:20:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.667 [2024-11-26 06:20:09.627206] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:10:25.667 [2024-11-26 06:20:09.629435] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:25.667 [2024-11-26 06:20:09.629651] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:25.667 [2024-11-26 06:20:09.629669] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:25.667 [2024-11-26 06:20:09.629951] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:25.667 [2024-11-26 06:20:09.630180] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:25.667 [2024-11-26 06:20:09.630195] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:25.667 [2024-11-26 06:20:09.630422] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:25.667 06:20:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.667 06:20:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:25.667 06:20:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:25.667 06:20:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:25.667 06:20:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:25.667 06:20:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:25.667 06:20:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:25.667 06:20:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.667 06:20:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.667 06:20:09 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.668 06:20:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.668 06:20:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.668 06:20:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:25.668 06:20:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.668 06:20:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.668 06:20:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.668 06:20:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.668 "name": "raid_bdev1", 00:10:25.668 "uuid": "8b6406c5-545a-4b68-b1f3-7c2fc53e9022", 00:10:25.668 "strip_size_kb": 64, 00:10:25.668 "state": "online", 00:10:25.668 "raid_level": "raid0", 00:10:25.668 "superblock": true, 00:10:25.668 "num_base_bdevs": 2, 00:10:25.668 "num_base_bdevs_discovered": 2, 00:10:25.668 "num_base_bdevs_operational": 2, 00:10:25.668 "base_bdevs_list": [ 00:10:25.668 { 00:10:25.668 "name": "BaseBdev1", 00:10:25.668 "uuid": "342a64e7-5513-568e-bc00-4f8a3e4eeb1a", 00:10:25.668 "is_configured": true, 00:10:25.668 "data_offset": 2048, 00:10:25.668 "data_size": 63488 00:10:25.668 }, 00:10:25.668 { 00:10:25.668 "name": "BaseBdev2", 00:10:25.668 "uuid": "661df175-cba6-53f2-a052-f3cbac21b2bd", 00:10:25.668 "is_configured": true, 00:10:25.668 "data_offset": 2048, 00:10:25.668 "data_size": 63488 00:10:25.668 } 00:10:25.668 ] 00:10:25.668 }' 00:10:25.668 06:20:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.668 06:20:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.235 06:20:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:26.235 06:20:10 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:26.235 [2024-11-26 06:20:10.187600] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:27.172 06:20:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:27.172 06:20:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.172 06:20:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.172 06:20:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.172 06:20:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:27.172 06:20:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:27.172 06:20:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:10:27.172 06:20:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:27.172 06:20:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:27.172 06:20:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:27.172 06:20:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:27.172 06:20:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:27.172 06:20:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:27.172 06:20:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.172 06:20:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.172 06:20:11 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.172 06:20:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.172 06:20:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.172 06:20:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:27.172 06:20:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.172 06:20:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.172 06:20:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.172 06:20:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.172 "name": "raid_bdev1", 00:10:27.172 "uuid": "8b6406c5-545a-4b68-b1f3-7c2fc53e9022", 00:10:27.172 "strip_size_kb": 64, 00:10:27.172 "state": "online", 00:10:27.172 "raid_level": "raid0", 00:10:27.172 "superblock": true, 00:10:27.172 "num_base_bdevs": 2, 00:10:27.172 "num_base_bdevs_discovered": 2, 00:10:27.172 "num_base_bdevs_operational": 2, 00:10:27.172 "base_bdevs_list": [ 00:10:27.172 { 00:10:27.172 "name": "BaseBdev1", 00:10:27.172 "uuid": "342a64e7-5513-568e-bc00-4f8a3e4eeb1a", 00:10:27.172 "is_configured": true, 00:10:27.172 "data_offset": 2048, 00:10:27.172 "data_size": 63488 00:10:27.172 }, 00:10:27.172 { 00:10:27.172 "name": "BaseBdev2", 00:10:27.172 "uuid": "661df175-cba6-53f2-a052-f3cbac21b2bd", 00:10:27.172 "is_configured": true, 00:10:27.172 "data_offset": 2048, 00:10:27.172 "data_size": 63488 00:10:27.172 } 00:10:27.172 ] 00:10:27.172 }' 00:10:27.172 06:20:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.173 06:20:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.744 06:20:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:10:27.744 06:20:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.744 06:20:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.744 [2024-11-26 06:20:11.605686] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:27.744 [2024-11-26 06:20:11.605739] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:27.744 [2024-11-26 06:20:11.608985] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:27.744 [2024-11-26 06:20:11.609173] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:27.744 [2024-11-26 06:20:11.609223] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:27.744 [2024-11-26 06:20:11.609238] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:27.744 { 00:10:27.744 "results": [ 00:10:27.744 { 00:10:27.744 "job": "raid_bdev1", 00:10:27.744 "core_mask": "0x1", 00:10:27.744 "workload": "randrw", 00:10:27.744 "percentage": 50, 00:10:27.744 "status": "finished", 00:10:27.744 "queue_depth": 1, 00:10:27.744 "io_size": 131072, 00:10:27.744 "runtime": 1.418291, 00:10:27.745 "iops": 13239.173061099591, 00:10:27.745 "mibps": 1654.8966326374489, 00:10:27.745 "io_failed": 1, 00:10:27.745 "io_timeout": 0, 00:10:27.745 "avg_latency_us": 105.01038723657388, 00:10:27.745 "min_latency_us": 29.289082969432314, 00:10:27.745 "max_latency_us": 1810.1100436681222 00:10:27.745 } 00:10:27.745 ], 00:10:27.745 "core_count": 1 00:10:27.745 } 00:10:27.745 06:20:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.745 06:20:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61915 00:10:27.745 06:20:11 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@954 -- # '[' -z 61915 ']' 00:10:27.745 06:20:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 61915 00:10:27.745 06:20:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:10:27.745 06:20:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:27.745 06:20:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61915 00:10:27.745 06:20:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:27.745 06:20:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:27.745 06:20:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61915' 00:10:27.745 killing process with pid 61915 00:10:27.745 06:20:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 61915 00:10:27.745 [2024-11-26 06:20:11.657858] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:27.745 06:20:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 61915 00:10:27.745 [2024-11-26 06:20:11.817298] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:29.122 06:20:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:29.122 06:20:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.1ohsvSWUeB 00:10:29.122 06:20:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:29.122 06:20:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:10:29.122 06:20:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:29.122 06:20:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:29.122 06:20:13 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:10:29.122 ************************************ 00:10:29.122 END TEST raid_write_error_test 00:10:29.122 ************************************ 00:10:29.122 06:20:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:10:29.123 00:10:29.123 real 0m4.726s 00:10:29.123 user 0m5.718s 00:10:29.123 sys 0m0.584s 00:10:29.123 06:20:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:29.123 06:20:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.123 06:20:13 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:29.123 06:20:13 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:10:29.123 06:20:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:29.123 06:20:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:29.123 06:20:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:29.123 ************************************ 00:10:29.123 START TEST raid_state_function_test 00:10:29.123 ************************************ 00:10:29.123 06:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:10:29.123 06:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:29.123 06:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:10:29.123 06:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:29.123 06:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:29.123 06:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:29.123 06:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:10:29.123 06:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:29.123 06:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:29.123 06:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:29.123 06:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:29.123 06:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:29.123 06:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:29.123 06:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:29.123 06:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:29.123 06:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:29.123 06:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:29.123 06:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:29.123 06:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:29.123 06:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:29.123 06:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:29.123 06:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:29.123 06:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:29.123 06:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:29.123 06:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62059 00:10:29.123 06:20:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:29.123 06:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62059' 00:10:29.123 Process raid pid: 62059 00:10:29.123 06:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62059 00:10:29.123 06:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62059 ']' 00:10:29.123 06:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:29.123 06:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:29.123 06:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:29.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:29.123 06:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:29.123 06:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.383 [2024-11-26 06:20:13.341429] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:10:29.383 [2024-11-26 06:20:13.341708] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:29.642 [2024-11-26 06:20:13.525023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.642 [2024-11-26 06:20:13.643871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.901 [2024-11-26 06:20:13.850061] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:29.901 [2024-11-26 06:20:13.850216] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:30.161 06:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:30.161 06:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:30.161 06:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:30.161 06:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.161 06:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.161 [2024-11-26 06:20:14.269960] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:30.161 [2024-11-26 06:20:14.270138] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:30.161 [2024-11-26 06:20:14.270155] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:30.161 [2024-11-26 06:20:14.270183] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:30.161 06:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.161 06:20:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:30.161 06:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:30.161 06:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:30.161 06:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:30.161 06:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:30.161 06:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:30.161 06:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.161 06:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.161 06:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.161 06:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.161 06:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.161 06:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.161 06:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.161 06:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.420 06:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.421 06:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.421 "name": "Existed_Raid", 00:10:30.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.421 "strip_size_kb": 64, 00:10:30.421 "state": "configuring", 00:10:30.421 
"raid_level": "concat", 00:10:30.421 "superblock": false, 00:10:30.421 "num_base_bdevs": 2, 00:10:30.421 "num_base_bdevs_discovered": 0, 00:10:30.421 "num_base_bdevs_operational": 2, 00:10:30.421 "base_bdevs_list": [ 00:10:30.421 { 00:10:30.421 "name": "BaseBdev1", 00:10:30.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.421 "is_configured": false, 00:10:30.421 "data_offset": 0, 00:10:30.421 "data_size": 0 00:10:30.421 }, 00:10:30.421 { 00:10:30.421 "name": "BaseBdev2", 00:10:30.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.421 "is_configured": false, 00:10:30.421 "data_offset": 0, 00:10:30.421 "data_size": 0 00:10:30.421 } 00:10:30.421 ] 00:10:30.421 }' 00:10:30.421 06:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.421 06:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.687 06:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:30.688 06:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.688 06:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.688 [2024-11-26 06:20:14.749126] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:30.688 [2024-11-26 06:20:14.749256] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:30.688 06:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.688 06:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:30.688 06:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.688 06:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:10:30.688 [2024-11-26 06:20:14.761109] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:30.688 [2024-11-26 06:20:14.761272] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:30.688 [2024-11-26 06:20:14.761305] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:30.688 [2024-11-26 06:20:14.761335] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:30.688 06:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.688 06:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:30.688 06:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.688 06:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.688 [2024-11-26 06:20:14.812660] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:30.688 BaseBdev1 00:10:30.688 06:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.688 06:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:30.688 06:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:30.688 06:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:30.688 06:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:30.688 06:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:30.688 06:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:30.688 06:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:10:30.688 06:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.688 06:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.947 06:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.947 06:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:30.947 06:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.947 06:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.947 [ 00:10:30.947 { 00:10:30.947 "name": "BaseBdev1", 00:10:30.947 "aliases": [ 00:10:30.947 "58db0988-a722-4a7b-924c-719da7825431" 00:10:30.947 ], 00:10:30.947 "product_name": "Malloc disk", 00:10:30.947 "block_size": 512, 00:10:30.947 "num_blocks": 65536, 00:10:30.947 "uuid": "58db0988-a722-4a7b-924c-719da7825431", 00:10:30.947 "assigned_rate_limits": { 00:10:30.947 "rw_ios_per_sec": 0, 00:10:30.947 "rw_mbytes_per_sec": 0, 00:10:30.947 "r_mbytes_per_sec": 0, 00:10:30.947 "w_mbytes_per_sec": 0 00:10:30.947 }, 00:10:30.947 "claimed": true, 00:10:30.947 "claim_type": "exclusive_write", 00:10:30.947 "zoned": false, 00:10:30.947 "supported_io_types": { 00:10:30.947 "read": true, 00:10:30.947 "write": true, 00:10:30.947 "unmap": true, 00:10:30.947 "flush": true, 00:10:30.947 "reset": true, 00:10:30.947 "nvme_admin": false, 00:10:30.947 "nvme_io": false, 00:10:30.947 "nvme_io_md": false, 00:10:30.947 "write_zeroes": true, 00:10:30.947 "zcopy": true, 00:10:30.947 "get_zone_info": false, 00:10:30.947 "zone_management": false, 00:10:30.947 "zone_append": false, 00:10:30.947 "compare": false, 00:10:30.947 "compare_and_write": false, 00:10:30.947 "abort": true, 00:10:30.947 "seek_hole": false, 00:10:30.947 "seek_data": false, 00:10:30.947 "copy": true, 00:10:30.947 "nvme_iov_md": 
false 00:10:30.947 }, 00:10:30.947 "memory_domains": [ 00:10:30.947 { 00:10:30.947 "dma_device_id": "system", 00:10:30.947 "dma_device_type": 1 00:10:30.947 }, 00:10:30.947 { 00:10:30.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.947 "dma_device_type": 2 00:10:30.947 } 00:10:30.947 ], 00:10:30.947 "driver_specific": {} 00:10:30.947 } 00:10:30.947 ] 00:10:30.947 06:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.947 06:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:30.947 06:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:30.947 06:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:30.947 06:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:30.947 06:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:30.947 06:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:30.947 06:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:30.947 06:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.947 06:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.947 06:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.947 06:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.947 06:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.947 06:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.947 06:20:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.947 06:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.947 06:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.947 06:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.947 "name": "Existed_Raid", 00:10:30.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.947 "strip_size_kb": 64, 00:10:30.947 "state": "configuring", 00:10:30.947 "raid_level": "concat", 00:10:30.947 "superblock": false, 00:10:30.947 "num_base_bdevs": 2, 00:10:30.947 "num_base_bdevs_discovered": 1, 00:10:30.947 "num_base_bdevs_operational": 2, 00:10:30.947 "base_bdevs_list": [ 00:10:30.947 { 00:10:30.947 "name": "BaseBdev1", 00:10:30.947 "uuid": "58db0988-a722-4a7b-924c-719da7825431", 00:10:30.947 "is_configured": true, 00:10:30.947 "data_offset": 0, 00:10:30.947 "data_size": 65536 00:10:30.947 }, 00:10:30.947 { 00:10:30.947 "name": "BaseBdev2", 00:10:30.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.947 "is_configured": false, 00:10:30.947 "data_offset": 0, 00:10:30.947 "data_size": 0 00:10:30.947 } 00:10:30.947 ] 00:10:30.947 }' 00:10:30.947 06:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.947 06:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.206 06:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:31.206 06:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.206 06:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.206 [2024-11-26 06:20:15.323945] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:31.206 [2024-11-26 06:20:15.324166] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:31.206 06:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.206 06:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:31.206 06:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.206 06:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.206 [2024-11-26 06:20:15.336029] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:31.465 [2024-11-26 06:20:15.338315] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:31.465 [2024-11-26 06:20:15.338377] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:31.465 06:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.465 06:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:31.465 06:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:31.465 06:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:31.465 06:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:31.465 06:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:31.465 06:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:31.465 06:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:31.466 06:20:15 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:31.466 06:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.466 06:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.466 06:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.466 06:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.466 06:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.466 06:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.466 06:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.466 06:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.466 06:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.466 06:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.466 "name": "Existed_Raid", 00:10:31.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.466 "strip_size_kb": 64, 00:10:31.466 "state": "configuring", 00:10:31.466 "raid_level": "concat", 00:10:31.466 "superblock": false, 00:10:31.466 "num_base_bdevs": 2, 00:10:31.466 "num_base_bdevs_discovered": 1, 00:10:31.466 "num_base_bdevs_operational": 2, 00:10:31.466 "base_bdevs_list": [ 00:10:31.466 { 00:10:31.466 "name": "BaseBdev1", 00:10:31.466 "uuid": "58db0988-a722-4a7b-924c-719da7825431", 00:10:31.466 "is_configured": true, 00:10:31.466 "data_offset": 0, 00:10:31.466 "data_size": 65536 00:10:31.466 }, 00:10:31.466 { 00:10:31.466 "name": "BaseBdev2", 00:10:31.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.466 "is_configured": false, 00:10:31.466 "data_offset": 0, 00:10:31.466 "data_size": 0 
00:10:31.466 } 00:10:31.466 ] 00:10:31.466 }' 00:10:31.466 06:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.466 06:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.725 06:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:31.725 06:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.725 06:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.725 [2024-11-26 06:20:15.829032] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:31.725 [2024-11-26 06:20:15.829212] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:31.725 [2024-11-26 06:20:15.829243] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:10:31.725 [2024-11-26 06:20:15.829648] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:31.725 [2024-11-26 06:20:15.829884] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:31.725 [2024-11-26 06:20:15.829940] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:31.725 [2024-11-26 06:20:15.830322] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:31.725 BaseBdev2 00:10:31.725 06:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.725 06:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:31.725 06:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:31.725 06:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:31.725 06:20:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:31.725 06:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:31.725 06:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:31.725 06:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:31.725 06:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.725 06:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.725 06:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.725 06:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:31.725 06:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.725 06:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.984 [ 00:10:31.984 { 00:10:31.984 "name": "BaseBdev2", 00:10:31.984 "aliases": [ 00:10:31.984 "7655c90e-d299-430c-9a55-522f36545a76" 00:10:31.984 ], 00:10:31.984 "product_name": "Malloc disk", 00:10:31.984 "block_size": 512, 00:10:31.984 "num_blocks": 65536, 00:10:31.984 "uuid": "7655c90e-d299-430c-9a55-522f36545a76", 00:10:31.984 "assigned_rate_limits": { 00:10:31.984 "rw_ios_per_sec": 0, 00:10:31.984 "rw_mbytes_per_sec": 0, 00:10:31.984 "r_mbytes_per_sec": 0, 00:10:31.984 "w_mbytes_per_sec": 0 00:10:31.984 }, 00:10:31.984 "claimed": true, 00:10:31.984 "claim_type": "exclusive_write", 00:10:31.984 "zoned": false, 00:10:31.984 "supported_io_types": { 00:10:31.984 "read": true, 00:10:31.984 "write": true, 00:10:31.984 "unmap": true, 00:10:31.984 "flush": true, 00:10:31.984 "reset": true, 00:10:31.984 "nvme_admin": false, 00:10:31.984 "nvme_io": false, 00:10:31.984 "nvme_io_md": 
false, 00:10:31.984 "write_zeroes": true, 00:10:31.984 "zcopy": true, 00:10:31.984 "get_zone_info": false, 00:10:31.984 "zone_management": false, 00:10:31.984 "zone_append": false, 00:10:31.985 "compare": false, 00:10:31.985 "compare_and_write": false, 00:10:31.985 "abort": true, 00:10:31.985 "seek_hole": false, 00:10:31.985 "seek_data": false, 00:10:31.985 "copy": true, 00:10:31.985 "nvme_iov_md": false 00:10:31.985 }, 00:10:31.985 "memory_domains": [ 00:10:31.985 { 00:10:31.985 "dma_device_id": "system", 00:10:31.985 "dma_device_type": 1 00:10:31.985 }, 00:10:31.985 { 00:10:31.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.985 "dma_device_type": 2 00:10:31.985 } 00:10:31.985 ], 00:10:31.985 "driver_specific": {} 00:10:31.985 } 00:10:31.985 ] 00:10:31.985 06:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.985 06:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:31.985 06:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:31.985 06:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:31.985 06:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:10:31.985 06:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:31.985 06:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:31.985 06:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:31.985 06:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:31.985 06:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:31.985 06:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:31.985 06:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.985 06:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.985 06:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.985 06:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.985 06:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.985 06:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.985 06:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.985 06:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.985 06:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.985 "name": "Existed_Raid", 00:10:31.985 "uuid": "417c3ac4-625f-4cd1-91a6-1e4df7d80e54", 00:10:31.985 "strip_size_kb": 64, 00:10:31.985 "state": "online", 00:10:31.985 "raid_level": "concat", 00:10:31.985 "superblock": false, 00:10:31.985 "num_base_bdevs": 2, 00:10:31.985 "num_base_bdevs_discovered": 2, 00:10:31.985 "num_base_bdevs_operational": 2, 00:10:31.985 "base_bdevs_list": [ 00:10:31.985 { 00:10:31.985 "name": "BaseBdev1", 00:10:31.985 "uuid": "58db0988-a722-4a7b-924c-719da7825431", 00:10:31.985 "is_configured": true, 00:10:31.985 "data_offset": 0, 00:10:31.985 "data_size": 65536 00:10:31.985 }, 00:10:31.985 { 00:10:31.985 "name": "BaseBdev2", 00:10:31.985 "uuid": "7655c90e-d299-430c-9a55-522f36545a76", 00:10:31.985 "is_configured": true, 00:10:31.985 "data_offset": 0, 00:10:31.985 "data_size": 65536 00:10:31.985 } 00:10:31.985 ] 00:10:31.985 }' 00:10:31.985 06:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:10:31.985 06:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.245 06:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:32.245 06:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:32.245 06:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:32.245 06:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:32.245 06:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:32.245 06:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:32.245 06:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:32.245 06:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.245 06:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.245 06:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:32.245 [2024-11-26 06:20:16.360577] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:32.245 06:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.505 06:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:32.505 "name": "Existed_Raid", 00:10:32.505 "aliases": [ 00:10:32.505 "417c3ac4-625f-4cd1-91a6-1e4df7d80e54" 00:10:32.505 ], 00:10:32.505 "product_name": "Raid Volume", 00:10:32.505 "block_size": 512, 00:10:32.505 "num_blocks": 131072, 00:10:32.505 "uuid": "417c3ac4-625f-4cd1-91a6-1e4df7d80e54", 00:10:32.505 "assigned_rate_limits": { 00:10:32.505 "rw_ios_per_sec": 0, 00:10:32.505 "rw_mbytes_per_sec": 0, 00:10:32.505 "r_mbytes_per_sec": 
0, 00:10:32.505 "w_mbytes_per_sec": 0 00:10:32.505 }, 00:10:32.505 "claimed": false, 00:10:32.505 "zoned": false, 00:10:32.505 "supported_io_types": { 00:10:32.505 "read": true, 00:10:32.505 "write": true, 00:10:32.505 "unmap": true, 00:10:32.505 "flush": true, 00:10:32.505 "reset": true, 00:10:32.505 "nvme_admin": false, 00:10:32.505 "nvme_io": false, 00:10:32.505 "nvme_io_md": false, 00:10:32.505 "write_zeroes": true, 00:10:32.505 "zcopy": false, 00:10:32.505 "get_zone_info": false, 00:10:32.505 "zone_management": false, 00:10:32.505 "zone_append": false, 00:10:32.505 "compare": false, 00:10:32.505 "compare_and_write": false, 00:10:32.505 "abort": false, 00:10:32.505 "seek_hole": false, 00:10:32.505 "seek_data": false, 00:10:32.505 "copy": false, 00:10:32.505 "nvme_iov_md": false 00:10:32.505 }, 00:10:32.505 "memory_domains": [ 00:10:32.505 { 00:10:32.505 "dma_device_id": "system", 00:10:32.505 "dma_device_type": 1 00:10:32.505 }, 00:10:32.505 { 00:10:32.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.505 "dma_device_type": 2 00:10:32.505 }, 00:10:32.505 { 00:10:32.505 "dma_device_id": "system", 00:10:32.505 "dma_device_type": 1 00:10:32.505 }, 00:10:32.505 { 00:10:32.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.505 "dma_device_type": 2 00:10:32.505 } 00:10:32.505 ], 00:10:32.505 "driver_specific": { 00:10:32.505 "raid": { 00:10:32.505 "uuid": "417c3ac4-625f-4cd1-91a6-1e4df7d80e54", 00:10:32.505 "strip_size_kb": 64, 00:10:32.505 "state": "online", 00:10:32.505 "raid_level": "concat", 00:10:32.505 "superblock": false, 00:10:32.505 "num_base_bdevs": 2, 00:10:32.505 "num_base_bdevs_discovered": 2, 00:10:32.505 "num_base_bdevs_operational": 2, 00:10:32.505 "base_bdevs_list": [ 00:10:32.505 { 00:10:32.505 "name": "BaseBdev1", 00:10:32.505 "uuid": "58db0988-a722-4a7b-924c-719da7825431", 00:10:32.505 "is_configured": true, 00:10:32.505 "data_offset": 0, 00:10:32.505 "data_size": 65536 00:10:32.505 }, 00:10:32.505 { 00:10:32.505 "name": "BaseBdev2", 
00:10:32.505 "uuid": "7655c90e-d299-430c-9a55-522f36545a76", 00:10:32.505 "is_configured": true, 00:10:32.505 "data_offset": 0, 00:10:32.505 "data_size": 65536 00:10:32.505 } 00:10:32.505 ] 00:10:32.505 } 00:10:32.505 } 00:10:32.505 }' 00:10:32.505 06:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:32.505 06:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:32.505 BaseBdev2' 00:10:32.505 06:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:32.505 06:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:32.505 06:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:32.505 06:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:32.505 06:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:32.505 06:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.505 06:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.505 06:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.505 06:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:32.505 06:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:32.505 06:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:32.505 06:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:10:32.505 06:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:32.505 06:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.505 06:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.505 06:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.505 06:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:32.505 06:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:32.505 06:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:32.505 06:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.505 06:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.505 [2024-11-26 06:20:16.611987] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:32.505 [2024-11-26 06:20:16.612174] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:32.505 [2024-11-26 06:20:16.612268] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:32.765 06:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.765 06:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:32.765 06:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:32.765 06:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:32.765 06:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:32.765 06:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:10:32.765 06:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:10:32.765 06:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.765 06:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:32.765 06:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:32.765 06:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:32.765 06:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:32.765 06:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.765 06:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.765 06:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.765 06:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.765 06:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.765 06:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.765 06:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.765 06:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.765 06:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.765 06:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.765 "name": "Existed_Raid", 00:10:32.765 "uuid": "417c3ac4-625f-4cd1-91a6-1e4df7d80e54", 00:10:32.765 "strip_size_kb": 64, 00:10:32.765 
"state": "offline", 00:10:32.765 "raid_level": "concat", 00:10:32.765 "superblock": false, 00:10:32.765 "num_base_bdevs": 2, 00:10:32.765 "num_base_bdevs_discovered": 1, 00:10:32.765 "num_base_bdevs_operational": 1, 00:10:32.766 "base_bdevs_list": [ 00:10:32.766 { 00:10:32.766 "name": null, 00:10:32.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.766 "is_configured": false, 00:10:32.766 "data_offset": 0, 00:10:32.766 "data_size": 65536 00:10:32.766 }, 00:10:32.766 { 00:10:32.766 "name": "BaseBdev2", 00:10:32.766 "uuid": "7655c90e-d299-430c-9a55-522f36545a76", 00:10:32.766 "is_configured": true, 00:10:32.766 "data_offset": 0, 00:10:32.766 "data_size": 65536 00:10:32.766 } 00:10:32.766 ] 00:10:32.766 }' 00:10:32.766 06:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.766 06:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.026 06:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:33.026 06:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:33.026 06:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.026 06:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.026 06:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:33.026 06:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.026 06:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.286 06:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:33.286 06:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:33.286 06:20:17 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:33.286 06:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.286 06:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.286 [2024-11-26 06:20:17.168169] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:33.286 [2024-11-26 06:20:17.168255] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:33.286 06:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.286 06:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:33.286 06:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:33.286 06:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:33.286 06:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.286 06:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.286 06:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.286 06:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.286 06:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:33.286 06:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:33.286 06:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:10:33.286 06:20:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62059 00:10:33.286 06:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62059 ']' 00:10:33.286 06:20:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 62059 00:10:33.286 06:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:33.286 06:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:33.286 06:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62059 00:10:33.286 killing process with pid 62059 00:10:33.286 06:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:33.286 06:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:33.286 06:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62059' 00:10:33.286 06:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62059 00:10:33.287 06:20:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62059 00:10:33.287 [2024-11-26 06:20:17.383831] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:33.287 [2024-11-26 06:20:17.404680] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:34.668 ************************************ 00:10:34.668 END TEST raid_state_function_test 00:10:34.668 ************************************ 00:10:34.668 06:20:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:34.668 00:10:34.668 real 0m5.488s 00:10:34.668 user 0m7.781s 00:10:34.668 sys 0m0.932s 00:10:34.668 06:20:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:34.668 06:20:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.668 06:20:18 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:10:34.668 06:20:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:10:34.668 06:20:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:34.668 06:20:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:34.668 ************************************ 00:10:34.668 START TEST raid_state_function_test_sb 00:10:34.668 ************************************ 00:10:34.668 06:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:10:34.668 06:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:34.668 06:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:10:34.668 06:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:34.668 06:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:34.668 06:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:34.668 06:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:34.668 06:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:34.669 06:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:34.669 06:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:34.669 06:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:34.669 06:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:34.669 06:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:34.669 06:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:34.669 06:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:10:34.669 06:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:34.669 06:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:34.669 06:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:34.669 06:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:34.669 Process raid pid: 62312 00:10:34.669 06:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:34.669 06:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:34.669 06:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:34.669 06:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:34.669 06:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:34.669 06:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62312 00:10:34.669 06:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62312' 00:10:34.669 06:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62312 00:10:34.669 06:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 62312 ']' 00:10:34.669 06:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:34.669 06:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:34.669 06:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:34.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:34.669 06:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:34.669 06:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:34.669 06:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.928 [2024-11-26 06:20:18.889900] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:10:34.928 [2024-11-26 06:20:18.890225] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:35.188 [2024-11-26 06:20:19.079191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:35.188 [2024-11-26 06:20:19.241919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.447 [2024-11-26 06:20:19.518280] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:35.447 [2024-11-26 06:20:19.518478] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:35.708 06:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:35.708 06:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:35.708 06:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:35.708 06:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.708 06:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.708 [2024-11-26 06:20:19.809846] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:35.708 [2024-11-26 06:20:19.809998] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:35.708 [2024-11-26 06:20:19.810035] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:35.708 [2024-11-26 06:20:19.810116] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:35.708 06:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.708 06:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:35.708 06:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:35.708 06:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:35.708 06:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:35.708 06:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:35.708 06:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:35.708 06:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.708 06:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.708 06:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.708 06:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.708 06:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.708 06:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "Existed_Raid")' 00:10:35.708 06:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.708 06:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.967 06:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.967 06:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.967 "name": "Existed_Raid", 00:10:35.967 "uuid": "665e63f5-e27b-4f14-8192-cebc6abc3481", 00:10:35.967 "strip_size_kb": 64, 00:10:35.967 "state": "configuring", 00:10:35.967 "raid_level": "concat", 00:10:35.967 "superblock": true, 00:10:35.967 "num_base_bdevs": 2, 00:10:35.967 "num_base_bdevs_discovered": 0, 00:10:35.967 "num_base_bdevs_operational": 2, 00:10:35.967 "base_bdevs_list": [ 00:10:35.967 { 00:10:35.967 "name": "BaseBdev1", 00:10:35.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.967 "is_configured": false, 00:10:35.967 "data_offset": 0, 00:10:35.967 "data_size": 0 00:10:35.967 }, 00:10:35.967 { 00:10:35.967 "name": "BaseBdev2", 00:10:35.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.967 "is_configured": false, 00:10:35.968 "data_offset": 0, 00:10:35.968 "data_size": 0 00:10:35.968 } 00:10:35.968 ] 00:10:35.968 }' 00:10:35.968 06:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.968 06:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.227 06:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:36.227 06:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.227 06:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.227 [2024-11-26 06:20:20.249042] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete 
raid bdev: Existed_Raid 00:10:36.227 [2024-11-26 06:20:20.249164] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:36.227 06:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.227 06:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:36.227 06:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.227 06:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.227 [2024-11-26 06:20:20.257017] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:36.227 [2024-11-26 06:20:20.257073] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:36.227 [2024-11-26 06:20:20.257085] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:36.227 [2024-11-26 06:20:20.257101] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:36.227 06:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.227 06:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:36.227 06:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.227 06:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.227 [2024-11-26 06:20:20.311714] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:36.228 BaseBdev1 00:10:36.228 06:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.228 06:20:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:36.228 06:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:36.228 06:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:36.228 06:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:36.228 06:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:36.228 06:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:36.228 06:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:36.228 06:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.228 06:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.228 06:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.228 06:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:36.228 06:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.228 06:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.228 [ 00:10:36.228 { 00:10:36.228 "name": "BaseBdev1", 00:10:36.228 "aliases": [ 00:10:36.228 "211e59c7-b608-48de-bfc2-89506d9d2b2e" 00:10:36.228 ], 00:10:36.228 "product_name": "Malloc disk", 00:10:36.228 "block_size": 512, 00:10:36.228 "num_blocks": 65536, 00:10:36.228 "uuid": "211e59c7-b608-48de-bfc2-89506d9d2b2e", 00:10:36.228 "assigned_rate_limits": { 00:10:36.228 "rw_ios_per_sec": 0, 00:10:36.228 "rw_mbytes_per_sec": 0, 00:10:36.228 "r_mbytes_per_sec": 0, 00:10:36.228 "w_mbytes_per_sec": 0 
00:10:36.228 }, 00:10:36.228 "claimed": true, 00:10:36.228 "claim_type": "exclusive_write", 00:10:36.228 "zoned": false, 00:10:36.228 "supported_io_types": { 00:10:36.228 "read": true, 00:10:36.228 "write": true, 00:10:36.228 "unmap": true, 00:10:36.228 "flush": true, 00:10:36.228 "reset": true, 00:10:36.228 "nvme_admin": false, 00:10:36.228 "nvme_io": false, 00:10:36.228 "nvme_io_md": false, 00:10:36.228 "write_zeroes": true, 00:10:36.228 "zcopy": true, 00:10:36.228 "get_zone_info": false, 00:10:36.228 "zone_management": false, 00:10:36.228 "zone_append": false, 00:10:36.228 "compare": false, 00:10:36.228 "compare_and_write": false, 00:10:36.228 "abort": true, 00:10:36.228 "seek_hole": false, 00:10:36.228 "seek_data": false, 00:10:36.228 "copy": true, 00:10:36.228 "nvme_iov_md": false 00:10:36.228 }, 00:10:36.228 "memory_domains": [ 00:10:36.228 { 00:10:36.228 "dma_device_id": "system", 00:10:36.228 "dma_device_type": 1 00:10:36.228 }, 00:10:36.228 { 00:10:36.228 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.228 "dma_device_type": 2 00:10:36.228 } 00:10:36.228 ], 00:10:36.228 "driver_specific": {} 00:10:36.228 } 00:10:36.228 ] 00:10:36.228 06:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.228 06:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:36.228 06:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:36.228 06:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.228 06:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:36.228 06:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:36.228 06:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:10:36.228 06:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:36.228 06:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.228 06:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.228 06:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.228 06:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.228 06:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.228 06:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.228 06:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.228 06:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.488 06:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.488 06:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.489 "name": "Existed_Raid", 00:10:36.489 "uuid": "89eb20f3-c043-4c82-8bfa-ba08958ff424", 00:10:36.489 "strip_size_kb": 64, 00:10:36.489 "state": "configuring", 00:10:36.489 "raid_level": "concat", 00:10:36.489 "superblock": true, 00:10:36.489 "num_base_bdevs": 2, 00:10:36.489 "num_base_bdevs_discovered": 1, 00:10:36.489 "num_base_bdevs_operational": 2, 00:10:36.489 "base_bdevs_list": [ 00:10:36.489 { 00:10:36.489 "name": "BaseBdev1", 00:10:36.489 "uuid": "211e59c7-b608-48de-bfc2-89506d9d2b2e", 00:10:36.489 "is_configured": true, 00:10:36.489 "data_offset": 2048, 00:10:36.489 "data_size": 63488 00:10:36.489 }, 00:10:36.489 { 00:10:36.489 "name": "BaseBdev2", 00:10:36.489 "uuid": "00000000-0000-0000-0000-000000000000", 
00:10:36.489 "is_configured": false, 00:10:36.489 "data_offset": 0, 00:10:36.489 "data_size": 0 00:10:36.489 } 00:10:36.489 ] 00:10:36.489 }' 00:10:36.489 06:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.489 06:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.748 06:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:36.748 06:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.748 06:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.748 [2024-11-26 06:20:20.830932] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:36.748 [2024-11-26 06:20:20.831085] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:36.748 06:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.748 06:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:36.748 06:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.748 06:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.748 [2024-11-26 06:20:20.839017] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:36.748 [2024-11-26 06:20:20.841659] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:36.748 [2024-11-26 06:20:20.841780] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:36.748 06:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.748 
06:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:36.748 06:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:36.748 06:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:36.748 06:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.748 06:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:36.748 06:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:36.748 06:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:36.748 06:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:36.748 06:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.748 06:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.748 06:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.748 06:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.748 06:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.748 06:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.748 06:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.748 06:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.748 06:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.062 
06:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.062 "name": "Existed_Raid", 00:10:37.062 "uuid": "9013b591-0495-4b40-b11f-2e5bf93af6e3", 00:10:37.062 "strip_size_kb": 64, 00:10:37.062 "state": "configuring", 00:10:37.062 "raid_level": "concat", 00:10:37.062 "superblock": true, 00:10:37.062 "num_base_bdevs": 2, 00:10:37.062 "num_base_bdevs_discovered": 1, 00:10:37.062 "num_base_bdevs_operational": 2, 00:10:37.062 "base_bdevs_list": [ 00:10:37.062 { 00:10:37.062 "name": "BaseBdev1", 00:10:37.062 "uuid": "211e59c7-b608-48de-bfc2-89506d9d2b2e", 00:10:37.062 "is_configured": true, 00:10:37.062 "data_offset": 2048, 00:10:37.062 "data_size": 63488 00:10:37.062 }, 00:10:37.062 { 00:10:37.062 "name": "BaseBdev2", 00:10:37.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.062 "is_configured": false, 00:10:37.062 "data_offset": 0, 00:10:37.062 "data_size": 0 00:10:37.062 } 00:10:37.062 ] 00:10:37.062 }' 00:10:37.062 06:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.062 06:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.322 06:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:37.322 06:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.322 06:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.322 [2024-11-26 06:20:21.367389] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:37.322 [2024-11-26 06:20:21.367917] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:37.322 [2024-11-26 06:20:21.367985] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:37.322 [2024-11-26 06:20:21.368419] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:37.322 BaseBdev2 00:10:37.322 [2024-11-26 06:20:21.368665] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:37.322 [2024-11-26 06:20:21.368685] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:37.322 [2024-11-26 06:20:21.368899] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:37.322 06:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.322 06:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:37.322 06:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:37.322 06:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:37.322 06:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:37.322 06:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:37.322 06:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:37.322 06:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:37.322 06:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.322 06:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.322 06:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.322 06:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:37.322 06:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.322 06:20:21 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.322 [ 00:10:37.322 { 00:10:37.322 "name": "BaseBdev2", 00:10:37.322 "aliases": [ 00:10:37.322 "0ad7cbe8-62a8-4f62-9125-4be1b5b04910" 00:10:37.322 ], 00:10:37.322 "product_name": "Malloc disk", 00:10:37.322 "block_size": 512, 00:10:37.322 "num_blocks": 65536, 00:10:37.322 "uuid": "0ad7cbe8-62a8-4f62-9125-4be1b5b04910", 00:10:37.322 "assigned_rate_limits": { 00:10:37.322 "rw_ios_per_sec": 0, 00:10:37.322 "rw_mbytes_per_sec": 0, 00:10:37.322 "r_mbytes_per_sec": 0, 00:10:37.322 "w_mbytes_per_sec": 0 00:10:37.322 }, 00:10:37.322 "claimed": true, 00:10:37.322 "claim_type": "exclusive_write", 00:10:37.322 "zoned": false, 00:10:37.322 "supported_io_types": { 00:10:37.322 "read": true, 00:10:37.322 "write": true, 00:10:37.322 "unmap": true, 00:10:37.322 "flush": true, 00:10:37.322 "reset": true, 00:10:37.322 "nvme_admin": false, 00:10:37.322 "nvme_io": false, 00:10:37.322 "nvme_io_md": false, 00:10:37.322 "write_zeroes": true, 00:10:37.322 "zcopy": true, 00:10:37.322 "get_zone_info": false, 00:10:37.322 "zone_management": false, 00:10:37.322 "zone_append": false, 00:10:37.322 "compare": false, 00:10:37.322 "compare_and_write": false, 00:10:37.322 "abort": true, 00:10:37.322 "seek_hole": false, 00:10:37.322 "seek_data": false, 00:10:37.322 "copy": true, 00:10:37.322 "nvme_iov_md": false 00:10:37.322 }, 00:10:37.322 "memory_domains": [ 00:10:37.322 { 00:10:37.322 "dma_device_id": "system", 00:10:37.322 "dma_device_type": 1 00:10:37.322 }, 00:10:37.322 { 00:10:37.322 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.322 "dma_device_type": 2 00:10:37.322 } 00:10:37.322 ], 00:10:37.322 "driver_specific": {} 00:10:37.322 } 00:10:37.322 ] 00:10:37.322 06:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.322 06:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:37.322 06:20:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:37.322 06:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:37.322 06:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:10:37.322 06:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:37.322 06:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:37.322 06:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:37.322 06:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:37.322 06:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:37.322 06:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.322 06:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.322 06:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.322 06:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.322 06:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.322 06:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.322 06:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.322 06:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.322 06:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.581 06:20:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.581 "name": "Existed_Raid", 00:10:37.581 "uuid": "9013b591-0495-4b40-b11f-2e5bf93af6e3", 00:10:37.581 "strip_size_kb": 64, 00:10:37.581 "state": "online", 00:10:37.581 "raid_level": "concat", 00:10:37.581 "superblock": true, 00:10:37.581 "num_base_bdevs": 2, 00:10:37.581 "num_base_bdevs_discovered": 2, 00:10:37.581 "num_base_bdevs_operational": 2, 00:10:37.581 "base_bdevs_list": [ 00:10:37.581 { 00:10:37.581 "name": "BaseBdev1", 00:10:37.581 "uuid": "211e59c7-b608-48de-bfc2-89506d9d2b2e", 00:10:37.581 "is_configured": true, 00:10:37.581 "data_offset": 2048, 00:10:37.581 "data_size": 63488 00:10:37.581 }, 00:10:37.581 { 00:10:37.581 "name": "BaseBdev2", 00:10:37.581 "uuid": "0ad7cbe8-62a8-4f62-9125-4be1b5b04910", 00:10:37.581 "is_configured": true, 00:10:37.581 "data_offset": 2048, 00:10:37.581 "data_size": 63488 00:10:37.581 } 00:10:37.581 ] 00:10:37.581 }' 00:10:37.581 06:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.581 06:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.841 06:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:37.841 06:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:37.841 06:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:37.841 06:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:37.841 06:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:37.841 06:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:37.841 06:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:10:37.841 06:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:37.841 06:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.841 06:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.841 [2024-11-26 06:20:21.930901] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:37.841 06:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.841 06:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:37.841 "name": "Existed_Raid", 00:10:37.841 "aliases": [ 00:10:37.841 "9013b591-0495-4b40-b11f-2e5bf93af6e3" 00:10:37.841 ], 00:10:37.841 "product_name": "Raid Volume", 00:10:37.841 "block_size": 512, 00:10:37.841 "num_blocks": 126976, 00:10:37.841 "uuid": "9013b591-0495-4b40-b11f-2e5bf93af6e3", 00:10:37.841 "assigned_rate_limits": { 00:10:37.841 "rw_ios_per_sec": 0, 00:10:37.841 "rw_mbytes_per_sec": 0, 00:10:37.841 "r_mbytes_per_sec": 0, 00:10:37.841 "w_mbytes_per_sec": 0 00:10:37.841 }, 00:10:37.841 "claimed": false, 00:10:37.841 "zoned": false, 00:10:37.841 "supported_io_types": { 00:10:37.841 "read": true, 00:10:37.841 "write": true, 00:10:37.841 "unmap": true, 00:10:37.841 "flush": true, 00:10:37.841 "reset": true, 00:10:37.841 "nvme_admin": false, 00:10:37.841 "nvme_io": false, 00:10:37.841 "nvme_io_md": false, 00:10:37.841 "write_zeroes": true, 00:10:37.841 "zcopy": false, 00:10:37.841 "get_zone_info": false, 00:10:37.841 "zone_management": false, 00:10:37.841 "zone_append": false, 00:10:37.841 "compare": false, 00:10:37.841 "compare_and_write": false, 00:10:37.841 "abort": false, 00:10:37.841 "seek_hole": false, 00:10:37.841 "seek_data": false, 00:10:37.841 "copy": false, 00:10:37.841 "nvme_iov_md": false 00:10:37.841 }, 00:10:37.841 "memory_domains": [ 00:10:37.841 { 00:10:37.841 
"dma_device_id": "system", 00:10:37.841 "dma_device_type": 1 00:10:37.841 }, 00:10:37.841 { 00:10:37.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.841 "dma_device_type": 2 00:10:37.841 }, 00:10:37.841 { 00:10:37.841 "dma_device_id": "system", 00:10:37.841 "dma_device_type": 1 00:10:37.841 }, 00:10:37.841 { 00:10:37.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.841 "dma_device_type": 2 00:10:37.841 } 00:10:37.841 ], 00:10:37.841 "driver_specific": { 00:10:37.841 "raid": { 00:10:37.841 "uuid": "9013b591-0495-4b40-b11f-2e5bf93af6e3", 00:10:37.841 "strip_size_kb": 64, 00:10:37.841 "state": "online", 00:10:37.841 "raid_level": "concat", 00:10:37.841 "superblock": true, 00:10:37.841 "num_base_bdevs": 2, 00:10:37.841 "num_base_bdevs_discovered": 2, 00:10:37.841 "num_base_bdevs_operational": 2, 00:10:37.841 "base_bdevs_list": [ 00:10:37.841 { 00:10:37.841 "name": "BaseBdev1", 00:10:37.841 "uuid": "211e59c7-b608-48de-bfc2-89506d9d2b2e", 00:10:37.841 "is_configured": true, 00:10:37.841 "data_offset": 2048, 00:10:37.841 "data_size": 63488 00:10:37.841 }, 00:10:37.841 { 00:10:37.841 "name": "BaseBdev2", 00:10:37.841 "uuid": "0ad7cbe8-62a8-4f62-9125-4be1b5b04910", 00:10:37.841 "is_configured": true, 00:10:37.841 "data_offset": 2048, 00:10:37.841 "data_size": 63488 00:10:37.841 } 00:10:37.841 ] 00:10:37.841 } 00:10:37.841 } 00:10:37.841 }' 00:10:38.101 06:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:38.101 06:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:38.101 BaseBdev2' 00:10:38.101 06:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.101 06:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:38.101 06:20:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:38.101 06:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.101 06:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:38.101 06:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.101 06:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.101 06:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.101 06:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:38.101 06:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:38.101 06:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:38.101 06:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:38.101 06:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.101 06:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.101 06:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.101 06:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.101 06:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:38.101 06:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:38.101 06:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:10:38.101 06:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.101 06:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.101 [2024-11-26 06:20:22.162267] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:38.101 [2024-11-26 06:20:22.162314] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:38.101 [2024-11-26 06:20:22.162391] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:38.361 06:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.361 06:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:38.361 06:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:38.361 06:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:38.361 06:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:38.361 06:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:38.361 06:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:10:38.361 06:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.361 06:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:38.361 06:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:38.361 06:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:38.361 06:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:10:38.361 06:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.361 06:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.361 06:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.361 06:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.361 06:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.361 06:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.361 06:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.361 06:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.361 06:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.361 06:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.361 "name": "Existed_Raid", 00:10:38.361 "uuid": "9013b591-0495-4b40-b11f-2e5bf93af6e3", 00:10:38.361 "strip_size_kb": 64, 00:10:38.361 "state": "offline", 00:10:38.361 "raid_level": "concat", 00:10:38.361 "superblock": true, 00:10:38.361 "num_base_bdevs": 2, 00:10:38.361 "num_base_bdevs_discovered": 1, 00:10:38.361 "num_base_bdevs_operational": 1, 00:10:38.361 "base_bdevs_list": [ 00:10:38.361 { 00:10:38.361 "name": null, 00:10:38.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.361 "is_configured": false, 00:10:38.361 "data_offset": 0, 00:10:38.361 "data_size": 63488 00:10:38.361 }, 00:10:38.361 { 00:10:38.361 "name": "BaseBdev2", 00:10:38.361 "uuid": "0ad7cbe8-62a8-4f62-9125-4be1b5b04910", 00:10:38.361 "is_configured": true, 00:10:38.361 "data_offset": 2048, 00:10:38.361 "data_size": 63488 00:10:38.361 } 00:10:38.361 ] 
00:10:38.361 }' 00:10:38.361 06:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.361 06:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.621 06:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:38.621 06:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:38.621 06:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.621 06:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:38.621 06:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.621 06:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.621 06:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.621 06:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:38.621 06:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:38.621 06:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:38.621 06:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.621 06:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.881 [2024-11-26 06:20:22.757864] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:38.881 [2024-11-26 06:20:22.758018] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:38.881 06:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.881 06:20:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:38.881 06:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:38.881 06:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.881 06:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:38.881 06:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.881 06:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.881 06:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.881 06:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:38.881 06:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:38.881 06:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:10:38.881 06:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62312 00:10:38.881 06:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 62312 ']' 00:10:38.881 06:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 62312 00:10:38.881 06:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:38.881 06:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:38.881 06:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62312 00:10:38.881 killing process with pid 62312 00:10:38.881 06:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:38.881 06:20:22 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:38.881 06:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62312' 00:10:38.881 06:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 62312 00:10:38.881 [2024-11-26 06:20:22.983627] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:38.881 06:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 62312 00:10:38.881 [2024-11-26 06:20:23.004665] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:40.262 06:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:40.262 00:10:40.262 real 0m5.550s 00:10:40.262 user 0m7.796s 00:10:40.262 sys 0m1.030s 00:10:40.262 06:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:40.262 06:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.262 ************************************ 00:10:40.262 END TEST raid_state_function_test_sb 00:10:40.262 ************************************ 00:10:40.262 06:20:24 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:10:40.262 06:20:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:40.262 06:20:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:40.262 06:20:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:40.521 ************************************ 00:10:40.521 START TEST raid_superblock_test 00:10:40.521 ************************************ 00:10:40.521 06:20:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:10:40.521 06:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:10:40.521 06:20:24 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:10:40.521 06:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:40.521 06:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:40.521 06:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:40.521 06:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:40.521 06:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:40.521 06:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:40.521 06:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:40.521 06:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:40.521 06:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:40.521 06:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:40.521 06:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:40.521 06:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:10:40.521 06:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:40.521 06:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:40.521 06:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62574 00:10:40.521 06:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:40.521 06:20:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62574 00:10:40.521 06:20:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 62574 ']' 00:10:40.521 
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:40.521 06:20:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:40.521 06:20:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:40.521 06:20:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:40.521 06:20:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:40.521 06:20:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.521 [2024-11-26 06:20:24.492553] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:10:40.521 [2024-11-26 06:20:24.492705] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62574 ] 00:10:40.841 [2024-11-26 06:20:24.678991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:40.841 [2024-11-26 06:20:24.827213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.118 [2024-11-26 06:20:25.079189] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:41.118 [2024-11-26 06:20:25.079249] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:41.378 06:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:41.378 06:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:41.378 06:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:41.378 06:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:10:41.378 06:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:41.378 06:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:41.378 06:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:41.378 06:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:41.378 06:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:41.378 06:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:41.378 06:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:41.378 06:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.378 06:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.378 malloc1 00:10:41.378 06:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.378 06:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:41.378 06:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.378 06:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.378 [2024-11-26 06:20:25.409935] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:41.378 [2024-11-26 06:20:25.410105] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:41.378 [2024-11-26 06:20:25.410159] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:41.378 [2024-11-26 06:20:25.410230] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:10:41.378 [2024-11-26 06:20:25.412882] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:41.378 [2024-11-26 06:20:25.412979] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:41.378 pt1 00:10:41.378 06:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.378 06:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:41.378 06:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:41.378 06:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:41.378 06:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:41.378 06:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:41.378 06:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:41.378 06:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:41.378 06:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:41.378 06:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:41.378 06:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.378 06:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.378 malloc2 00:10:41.378 06:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.378 06:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:41.378 06:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:41.378 06:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.378 [2024-11-26 06:20:25.478240] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:41.378 [2024-11-26 06:20:25.478374] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:41.378 [2024-11-26 06:20:25.478423] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:41.378 [2024-11-26 06:20:25.478478] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:41.378 [2024-11-26 06:20:25.481381] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:41.378 [2024-11-26 06:20:25.481470] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:41.378 pt2 00:10:41.378 06:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.378 06:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:41.378 06:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:41.378 06:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:10:41.378 06:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.378 06:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.378 [2024-11-26 06:20:25.490376] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:41.378 [2024-11-26 06:20:25.492637] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:41.378 [2024-11-26 06:20:25.492824] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:41.378 [2024-11-26 06:20:25.492838] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:10:41.378 [2024-11-26 06:20:25.493181] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:41.378 [2024-11-26 06:20:25.493407] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:41.378 [2024-11-26 06:20:25.493430] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:41.378 [2024-11-26 06:20:25.493638] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:41.378 06:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.378 06:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:41.378 06:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:41.378 06:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:41.378 06:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:41.378 06:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:41.378 06:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:41.378 06:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.378 06:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.378 06:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.378 06:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.378 06:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.378 06:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:41.378 06:20:25 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.378 06:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.639 06:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.639 06:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.639 "name": "raid_bdev1", 00:10:41.639 "uuid": "1a59e298-8cae-4389-b335-71357de73340", 00:10:41.639 "strip_size_kb": 64, 00:10:41.639 "state": "online", 00:10:41.639 "raid_level": "concat", 00:10:41.639 "superblock": true, 00:10:41.639 "num_base_bdevs": 2, 00:10:41.639 "num_base_bdevs_discovered": 2, 00:10:41.639 "num_base_bdevs_operational": 2, 00:10:41.639 "base_bdevs_list": [ 00:10:41.639 { 00:10:41.639 "name": "pt1", 00:10:41.639 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:41.639 "is_configured": true, 00:10:41.639 "data_offset": 2048, 00:10:41.639 "data_size": 63488 00:10:41.639 }, 00:10:41.639 { 00:10:41.639 "name": "pt2", 00:10:41.639 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:41.639 "is_configured": true, 00:10:41.639 "data_offset": 2048, 00:10:41.639 "data_size": 63488 00:10:41.639 } 00:10:41.639 ] 00:10:41.639 }' 00:10:41.639 06:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.639 06:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.899 06:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:41.899 06:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:41.899 06:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:41.899 06:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:41.899 06:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:41.899 
06:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:41.899 06:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:41.899 06:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:41.899 06:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.899 06:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.899 [2024-11-26 06:20:25.953937] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:41.899 06:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.899 06:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:41.899 "name": "raid_bdev1", 00:10:41.899 "aliases": [ 00:10:41.899 "1a59e298-8cae-4389-b335-71357de73340" 00:10:41.899 ], 00:10:41.899 "product_name": "Raid Volume", 00:10:41.899 "block_size": 512, 00:10:41.899 "num_blocks": 126976, 00:10:41.899 "uuid": "1a59e298-8cae-4389-b335-71357de73340", 00:10:41.899 "assigned_rate_limits": { 00:10:41.899 "rw_ios_per_sec": 0, 00:10:41.899 "rw_mbytes_per_sec": 0, 00:10:41.899 "r_mbytes_per_sec": 0, 00:10:41.899 "w_mbytes_per_sec": 0 00:10:41.899 }, 00:10:41.899 "claimed": false, 00:10:41.899 "zoned": false, 00:10:41.899 "supported_io_types": { 00:10:41.899 "read": true, 00:10:41.899 "write": true, 00:10:41.899 "unmap": true, 00:10:41.899 "flush": true, 00:10:41.899 "reset": true, 00:10:41.899 "nvme_admin": false, 00:10:41.899 "nvme_io": false, 00:10:41.899 "nvme_io_md": false, 00:10:41.899 "write_zeroes": true, 00:10:41.899 "zcopy": false, 00:10:41.899 "get_zone_info": false, 00:10:41.899 "zone_management": false, 00:10:41.899 "zone_append": false, 00:10:41.899 "compare": false, 00:10:41.899 "compare_and_write": false, 00:10:41.899 "abort": false, 00:10:41.899 "seek_hole": false, 00:10:41.899 
"seek_data": false, 00:10:41.899 "copy": false, 00:10:41.899 "nvme_iov_md": false 00:10:41.899 }, 00:10:41.899 "memory_domains": [ 00:10:41.899 { 00:10:41.899 "dma_device_id": "system", 00:10:41.899 "dma_device_type": 1 00:10:41.899 }, 00:10:41.899 { 00:10:41.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.899 "dma_device_type": 2 00:10:41.899 }, 00:10:41.899 { 00:10:41.899 "dma_device_id": "system", 00:10:41.899 "dma_device_type": 1 00:10:41.899 }, 00:10:41.899 { 00:10:41.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.899 "dma_device_type": 2 00:10:41.899 } 00:10:41.899 ], 00:10:41.899 "driver_specific": { 00:10:41.899 "raid": { 00:10:41.899 "uuid": "1a59e298-8cae-4389-b335-71357de73340", 00:10:41.899 "strip_size_kb": 64, 00:10:41.899 "state": "online", 00:10:41.899 "raid_level": "concat", 00:10:41.899 "superblock": true, 00:10:41.899 "num_base_bdevs": 2, 00:10:41.899 "num_base_bdevs_discovered": 2, 00:10:41.899 "num_base_bdevs_operational": 2, 00:10:41.899 "base_bdevs_list": [ 00:10:41.899 { 00:10:41.899 "name": "pt1", 00:10:41.899 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:41.899 "is_configured": true, 00:10:41.899 "data_offset": 2048, 00:10:41.899 "data_size": 63488 00:10:41.899 }, 00:10:41.899 { 00:10:41.899 "name": "pt2", 00:10:41.899 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:41.899 "is_configured": true, 00:10:41.899 "data_offset": 2048, 00:10:41.899 "data_size": 63488 00:10:41.899 } 00:10:41.899 ] 00:10:41.899 } 00:10:41.899 } 00:10:41.899 }' 00:10:41.899 06:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:41.899 06:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:41.899 pt2' 00:10:41.899 06:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:42.158 06:20:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:42.158 06:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:42.158 06:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:42.158 06:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.158 06:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.158 06:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:42.158 06:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.158 06:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:42.158 06:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:42.158 06:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:42.158 06:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:42.158 06:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:42.158 06:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.158 06:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.158 06:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.158 06:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:42.158 06:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:42.158 06:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:10:42.158 06:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.158 06:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.158 06:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:42.158 [2024-11-26 06:20:26.177603] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:42.158 06:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.158 06:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=1a59e298-8cae-4389-b335-71357de73340 00:10:42.158 06:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 1a59e298-8cae-4389-b335-71357de73340 ']' 00:10:42.158 06:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:42.158 06:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.158 06:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.158 [2024-11-26 06:20:26.209225] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:42.158 [2024-11-26 06:20:26.209273] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:42.158 [2024-11-26 06:20:26.209396] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:42.158 [2024-11-26 06:20:26.209461] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:42.158 [2024-11-26 06:20:26.209480] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:42.158 06:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.158 06:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:42.158 06:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:42.158 06:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.158 06:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.158 06:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.158 06:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:42.158 06:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:42.158 06:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:42.158 06:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:42.158 06:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.158 06:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.158 06:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.158 06:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:42.158 06:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:42.158 06:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.158 06:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.417 06:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.417 06:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:42.417 06:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.417 06:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | 
select(.product_name == "passthru")] | any' 00:10:42.417 06:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.417 06:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.417 06:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:42.418 06:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:10:42.418 06:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:42.418 06:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:10:42.418 06:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:42.418 06:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:42.418 06:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:42.418 06:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:42.418 06:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:10:42.418 06:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.418 06:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.418 [2024-11-26 06:20:26.349041] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:42.418 [2024-11-26 06:20:26.351246] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:42.418 [2024-11-26 06:20:26.351421] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: 
*ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:42.418 [2024-11-26 06:20:26.351491] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:42.418 [2024-11-26 06:20:26.351509] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:42.418 [2024-11-26 06:20:26.351522] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:42.418 request: 00:10:42.418 { 00:10:42.418 "name": "raid_bdev1", 00:10:42.418 "raid_level": "concat", 00:10:42.418 "base_bdevs": [ 00:10:42.418 "malloc1", 00:10:42.418 "malloc2" 00:10:42.418 ], 00:10:42.418 "strip_size_kb": 64, 00:10:42.418 "superblock": false, 00:10:42.418 "method": "bdev_raid_create", 00:10:42.418 "req_id": 1 00:10:42.418 } 00:10:42.418 Got JSON-RPC error response 00:10:42.418 response: 00:10:42.418 { 00:10:42.418 "code": -17, 00:10:42.418 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:42.418 } 00:10:42.418 06:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:42.418 06:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:42.418 06:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:42.418 06:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:42.418 06:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:42.418 06:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.418 06:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.418 06:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:42.418 06:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.418 
06:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.418 06:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:42.418 06:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:42.418 06:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:42.418 06:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.418 06:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.418 [2024-11-26 06:20:26.416889] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:42.418 [2024-11-26 06:20:26.417072] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:42.418 [2024-11-26 06:20:26.417119] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:42.418 [2024-11-26 06:20:26.417161] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:42.418 [2024-11-26 06:20:26.419718] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:42.418 [2024-11-26 06:20:26.419845] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:42.418 [2024-11-26 06:20:26.419999] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:42.418 [2024-11-26 06:20:26.420135] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:42.418 pt1 00:10:42.418 06:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.418 06:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:10:42.418 06:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:10:42.418 06:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:42.418 06:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:42.418 06:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:42.418 06:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:42.418 06:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.418 06:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.418 06:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.418 06:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.418 06:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.418 06:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.418 06:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.418 06:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:42.418 06:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.418 06:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.418 "name": "raid_bdev1", 00:10:42.418 "uuid": "1a59e298-8cae-4389-b335-71357de73340", 00:10:42.418 "strip_size_kb": 64, 00:10:42.418 "state": "configuring", 00:10:42.418 "raid_level": "concat", 00:10:42.418 "superblock": true, 00:10:42.418 "num_base_bdevs": 2, 00:10:42.418 "num_base_bdevs_discovered": 1, 00:10:42.418 "num_base_bdevs_operational": 2, 00:10:42.418 "base_bdevs_list": [ 00:10:42.418 { 00:10:42.418 "name": "pt1", 00:10:42.418 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:10:42.418 "is_configured": true, 00:10:42.418 "data_offset": 2048, 00:10:42.418 "data_size": 63488 00:10:42.418 }, 00:10:42.418 { 00:10:42.418 "name": null, 00:10:42.418 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:42.418 "is_configured": false, 00:10:42.418 "data_offset": 2048, 00:10:42.418 "data_size": 63488 00:10:42.418 } 00:10:42.418 ] 00:10:42.418 }' 00:10:42.418 06:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.418 06:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.987 06:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:10:42.987 06:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:42.987 06:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:42.987 06:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:42.987 06:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.987 06:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.987 [2024-11-26 06:20:26.896109] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:42.987 [2024-11-26 06:20:26.896285] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:42.987 [2024-11-26 06:20:26.896317] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:10:42.987 [2024-11-26 06:20:26.896330] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:42.987 [2024-11-26 06:20:26.896861] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:42.987 [2024-11-26 06:20:26.896886] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:10:42.987 [2024-11-26 06:20:26.896981] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:42.987 [2024-11-26 06:20:26.897009] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:42.987 [2024-11-26 06:20:26.897169] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:42.987 [2024-11-26 06:20:26.897184] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:42.987 [2024-11-26 06:20:26.897467] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:42.987 [2024-11-26 06:20:26.897655] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:42.987 [2024-11-26 06:20:26.897665] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:42.987 [2024-11-26 06:20:26.897832] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:42.987 pt2 00:10:42.987 06:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.987 06:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:42.987 06:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:42.987 06:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:42.987 06:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:42.987 06:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:42.987 06:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:42.987 06:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:42.987 06:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=2 00:10:42.987 06:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.987 06:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.987 06:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.987 06:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.987 06:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.987 06:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:42.987 06:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.987 06:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.987 06:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.987 06:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.987 "name": "raid_bdev1", 00:10:42.987 "uuid": "1a59e298-8cae-4389-b335-71357de73340", 00:10:42.987 "strip_size_kb": 64, 00:10:42.987 "state": "online", 00:10:42.987 "raid_level": "concat", 00:10:42.987 "superblock": true, 00:10:42.987 "num_base_bdevs": 2, 00:10:42.987 "num_base_bdevs_discovered": 2, 00:10:42.987 "num_base_bdevs_operational": 2, 00:10:42.987 "base_bdevs_list": [ 00:10:42.987 { 00:10:42.987 "name": "pt1", 00:10:42.987 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:42.987 "is_configured": true, 00:10:42.987 "data_offset": 2048, 00:10:42.987 "data_size": 63488 00:10:42.987 }, 00:10:42.987 { 00:10:42.987 "name": "pt2", 00:10:42.987 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:42.987 "is_configured": true, 00:10:42.987 "data_offset": 2048, 00:10:42.987 "data_size": 63488 00:10:42.987 } 00:10:42.987 ] 00:10:42.987 }' 00:10:42.987 06:20:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.987 06:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.246 06:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:43.246 06:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:43.246 06:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:43.246 06:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:43.246 06:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:43.246 06:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:43.246 06:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:43.246 06:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:43.246 06:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.246 06:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.246 [2024-11-26 06:20:27.355608] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:43.246 06:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.506 06:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:43.506 "name": "raid_bdev1", 00:10:43.506 "aliases": [ 00:10:43.506 "1a59e298-8cae-4389-b335-71357de73340" 00:10:43.506 ], 00:10:43.506 "product_name": "Raid Volume", 00:10:43.506 "block_size": 512, 00:10:43.506 "num_blocks": 126976, 00:10:43.506 "uuid": "1a59e298-8cae-4389-b335-71357de73340", 00:10:43.506 "assigned_rate_limits": { 00:10:43.506 "rw_ios_per_sec": 0, 00:10:43.506 "rw_mbytes_per_sec": 0, 00:10:43.506 
"r_mbytes_per_sec": 0, 00:10:43.506 "w_mbytes_per_sec": 0 00:10:43.506 }, 00:10:43.506 "claimed": false, 00:10:43.506 "zoned": false, 00:10:43.506 "supported_io_types": { 00:10:43.506 "read": true, 00:10:43.506 "write": true, 00:10:43.506 "unmap": true, 00:10:43.506 "flush": true, 00:10:43.506 "reset": true, 00:10:43.506 "nvme_admin": false, 00:10:43.506 "nvme_io": false, 00:10:43.506 "nvme_io_md": false, 00:10:43.506 "write_zeroes": true, 00:10:43.506 "zcopy": false, 00:10:43.506 "get_zone_info": false, 00:10:43.506 "zone_management": false, 00:10:43.506 "zone_append": false, 00:10:43.506 "compare": false, 00:10:43.506 "compare_and_write": false, 00:10:43.506 "abort": false, 00:10:43.506 "seek_hole": false, 00:10:43.506 "seek_data": false, 00:10:43.506 "copy": false, 00:10:43.506 "nvme_iov_md": false 00:10:43.506 }, 00:10:43.506 "memory_domains": [ 00:10:43.506 { 00:10:43.506 "dma_device_id": "system", 00:10:43.506 "dma_device_type": 1 00:10:43.506 }, 00:10:43.506 { 00:10:43.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.506 "dma_device_type": 2 00:10:43.506 }, 00:10:43.506 { 00:10:43.506 "dma_device_id": "system", 00:10:43.506 "dma_device_type": 1 00:10:43.506 }, 00:10:43.506 { 00:10:43.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.506 "dma_device_type": 2 00:10:43.506 } 00:10:43.506 ], 00:10:43.506 "driver_specific": { 00:10:43.506 "raid": { 00:10:43.506 "uuid": "1a59e298-8cae-4389-b335-71357de73340", 00:10:43.506 "strip_size_kb": 64, 00:10:43.506 "state": "online", 00:10:43.506 "raid_level": "concat", 00:10:43.506 "superblock": true, 00:10:43.506 "num_base_bdevs": 2, 00:10:43.506 "num_base_bdevs_discovered": 2, 00:10:43.506 "num_base_bdevs_operational": 2, 00:10:43.506 "base_bdevs_list": [ 00:10:43.506 { 00:10:43.506 "name": "pt1", 00:10:43.506 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:43.506 "is_configured": true, 00:10:43.506 "data_offset": 2048, 00:10:43.506 "data_size": 63488 00:10:43.506 }, 00:10:43.506 { 00:10:43.506 "name": 
"pt2", 00:10:43.506 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:43.506 "is_configured": true, 00:10:43.506 "data_offset": 2048, 00:10:43.506 "data_size": 63488 00:10:43.506 } 00:10:43.506 ] 00:10:43.506 } 00:10:43.506 } 00:10:43.506 }' 00:10:43.506 06:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:43.506 06:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:43.506 pt2' 00:10:43.506 06:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:43.506 06:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:43.506 06:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:43.506 06:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:43.506 06:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.506 06:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.506 06:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:43.506 06:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.506 06:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:43.506 06:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:43.506 06:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:43.506 06:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:43.506 06:20:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:43.506 06:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.506 06:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.506 06:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.506 06:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:43.506 06:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:43.506 06:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:43.507 06:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:43.507 06:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.507 06:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.507 [2024-11-26 06:20:27.563339] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:43.507 06:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.507 06:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 1a59e298-8cae-4389-b335-71357de73340 '!=' 1a59e298-8cae-4389-b335-71357de73340 ']' 00:10:43.507 06:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:10:43.507 06:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:43.507 06:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:43.507 06:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62574 00:10:43.507 06:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 62574 ']' 00:10:43.507 06:20:27 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@958 -- # kill -0 62574 00:10:43.507 06:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:43.507 06:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:43.507 06:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62574 00:10:43.507 06:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:43.507 06:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:43.507 06:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62574' 00:10:43.507 killing process with pid 62574 00:10:43.507 06:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 62574 00:10:43.507 [2024-11-26 06:20:27.632842] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:43.507 06:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 62574 00:10:43.507 [2024-11-26 06:20:27.633087] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:43.507 [2024-11-26 06:20:27.633173] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:43.507 [2024-11-26 06:20:27.633200] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:43.767 [2024-11-26 06:20:27.882862] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:45.173 06:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:45.173 00:10:45.173 real 0m4.763s 00:10:45.173 user 0m6.452s 00:10:45.173 sys 0m0.896s 00:10:45.173 06:20:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:45.173 06:20:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:10:45.173 ************************************ 00:10:45.173 END TEST raid_superblock_test 00:10:45.173 ************************************ 00:10:45.173 06:20:29 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:10:45.173 06:20:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:45.173 06:20:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:45.173 06:20:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:45.173 ************************************ 00:10:45.173 START TEST raid_read_error_test 00:10:45.173 ************************************ 00:10:45.173 06:20:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:10:45.173 06:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:45.173 06:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:10:45.173 06:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:45.173 06:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:45.173 06:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:45.173 06:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:45.173 06:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:45.173 06:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:45.173 06:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:45.173 06:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:45.173 06:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:45.173 06:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- 
# base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:45.173 06:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:45.173 06:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:45.173 06:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:45.173 06:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:45.173 06:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:45.173 06:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:45.173 06:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:45.173 06:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:45.173 06:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:45.173 06:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:45.173 06:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.kntzSvHn7w 00:10:45.173 06:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62781 00:10:45.173 06:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:45.173 06:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62781 00:10:45.173 06:20:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 62781 ']' 00:10:45.173 06:20:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:45.173 06:20:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:45.173 06:20:29 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:45.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:45.173 06:20:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:45.173 06:20:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.431 [2024-11-26 06:20:29.362313] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:10:45.431 [2024-11-26 06:20:29.362623] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62781 ] 00:10:45.431 [2024-11-26 06:20:29.542135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:45.689 [2024-11-26 06:20:29.683490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.946 [2024-11-26 06:20:29.927572] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:45.946 [2024-11-26 06:20:29.927661] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:46.205 06:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:46.205 06:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:46.205 06:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:46.206 06:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:46.206 06:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.206 06:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.206 BaseBdev1_malloc 
00:10:46.206 06:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.206 06:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:46.206 06:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.206 06:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.206 true 00:10:46.206 06:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.206 06:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:46.206 06:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.206 06:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.465 [2024-11-26 06:20:30.340048] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:46.465 [2024-11-26 06:20:30.340256] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:46.465 [2024-11-26 06:20:30.340290] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:46.465 [2024-11-26 06:20:30.340304] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:46.465 [2024-11-26 06:20:30.343079] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:46.465 [2024-11-26 06:20:30.343146] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:46.465 BaseBdev1 00:10:46.465 06:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.465 06:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:46.465 06:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2_malloc 00:10:46.465 06:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.465 06:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.465 BaseBdev2_malloc 00:10:46.465 06:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.465 06:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:46.465 06:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.465 06:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.465 true 00:10:46.465 06:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.465 06:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:46.465 06:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.465 06:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.465 [2024-11-26 06:20:30.414706] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:46.465 [2024-11-26 06:20:30.414797] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:46.465 [2024-11-26 06:20:30.414824] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:46.465 [2024-11-26 06:20:30.414837] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:46.465 [2024-11-26 06:20:30.417616] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:46.465 [2024-11-26 06:20:30.417776] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:46.465 BaseBdev2 00:10:46.465 06:20:30 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.465 06:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:10:46.465 06:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.465 06:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.465 [2024-11-26 06:20:30.426879] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:46.465 [2024-11-26 06:20:30.429292] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:46.465 [2024-11-26 06:20:30.429578] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:46.465 [2024-11-26 06:20:30.429599] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:46.465 [2024-11-26 06:20:30.429929] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:46.465 [2024-11-26 06:20:30.430178] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:46.465 [2024-11-26 06:20:30.430194] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:46.465 [2024-11-26 06:20:30.430431] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:46.465 06:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.465 06:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:46.465 06:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:46.465 06:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:46.465 06:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- 
# local raid_level=concat 00:10:46.465 06:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:46.465 06:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:46.465 06:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.465 06:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.465 06:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.465 06:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.465 06:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.465 06:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:46.465 06:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.465 06:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.465 06:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.465 06:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.465 "name": "raid_bdev1", 00:10:46.465 "uuid": "4ad54199-437e-4e7f-8f73-cd64caade98b", 00:10:46.465 "strip_size_kb": 64, 00:10:46.465 "state": "online", 00:10:46.465 "raid_level": "concat", 00:10:46.465 "superblock": true, 00:10:46.465 "num_base_bdevs": 2, 00:10:46.465 "num_base_bdevs_discovered": 2, 00:10:46.465 "num_base_bdevs_operational": 2, 00:10:46.465 "base_bdevs_list": [ 00:10:46.465 { 00:10:46.465 "name": "BaseBdev1", 00:10:46.465 "uuid": "1a3ea235-75a6-5c35-9cd8-a6f817758e55", 00:10:46.465 "is_configured": true, 00:10:46.465 "data_offset": 2048, 00:10:46.465 "data_size": 63488 00:10:46.465 }, 00:10:46.465 { 00:10:46.465 "name": "BaseBdev2", 00:10:46.465 
"uuid": "23b3e87b-9600-5bb7-ac9f-6d7c6384e2e0", 00:10:46.465 "is_configured": true, 00:10:46.465 "data_offset": 2048, 00:10:46.465 "data_size": 63488 00:10:46.465 } 00:10:46.465 ] 00:10:46.465 }' 00:10:46.465 06:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.465 06:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.031 06:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:47.031 06:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:47.031 [2024-11-26 06:20:30.971506] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:47.970 06:20:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:47.970 06:20:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.970 06:20:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.970 06:20:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.970 06:20:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:47.970 06:20:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:47.970 06:20:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:10:47.970 06:20:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:47.970 06:20:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:47.970 06:20:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:47.970 06:20:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # 
local raid_level=concat 00:10:47.970 06:20:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:47.970 06:20:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:47.970 06:20:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.970 06:20:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.970 06:20:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.970 06:20:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.970 06:20:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.970 06:20:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:47.970 06:20:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.970 06:20:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.970 06:20:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.970 06:20:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.970 "name": "raid_bdev1", 00:10:47.970 "uuid": "4ad54199-437e-4e7f-8f73-cd64caade98b", 00:10:47.970 "strip_size_kb": 64, 00:10:47.970 "state": "online", 00:10:47.970 "raid_level": "concat", 00:10:47.970 "superblock": true, 00:10:47.970 "num_base_bdevs": 2, 00:10:47.970 "num_base_bdevs_discovered": 2, 00:10:47.970 "num_base_bdevs_operational": 2, 00:10:47.970 "base_bdevs_list": [ 00:10:47.970 { 00:10:47.970 "name": "BaseBdev1", 00:10:47.970 "uuid": "1a3ea235-75a6-5c35-9cd8-a6f817758e55", 00:10:47.970 "is_configured": true, 00:10:47.970 "data_offset": 2048, 00:10:47.970 "data_size": 63488 00:10:47.970 }, 00:10:47.970 { 00:10:47.970 "name": "BaseBdev2", 00:10:47.970 "uuid": 
"23b3e87b-9600-5bb7-ac9f-6d7c6384e2e0", 00:10:47.970 "is_configured": true, 00:10:47.970 "data_offset": 2048, 00:10:47.970 "data_size": 63488 00:10:47.970 } 00:10:47.970 ] 00:10:47.970 }' 00:10:47.970 06:20:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.970 06:20:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.541 06:20:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:48.541 06:20:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.541 06:20:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.541 [2024-11-26 06:20:32.405368] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:48.541 [2024-11-26 06:20:32.405420] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:48.541 [2024-11-26 06:20:32.408619] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:48.542 [2024-11-26 06:20:32.408684] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:48.542 [2024-11-26 06:20:32.408722] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:48.542 [2024-11-26 06:20:32.408739] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:48.542 { 00:10:48.542 "results": [ 00:10:48.542 { 00:10:48.542 "job": "raid_bdev1", 00:10:48.542 "core_mask": "0x1", 00:10:48.542 "workload": "randrw", 00:10:48.542 "percentage": 50, 00:10:48.542 "status": "finished", 00:10:48.542 "queue_depth": 1, 00:10:48.542 "io_size": 131072, 00:10:48.542 "runtime": 1.434413, 00:10:48.542 "iops": 13500.99308915912, 00:10:48.542 "mibps": 1687.62413614489, 00:10:48.542 "io_failed": 1, 00:10:48.542 "io_timeout": 0, 00:10:48.542 "avg_latency_us": 
102.86499192905232, 00:10:48.542 "min_latency_us": 28.05938864628821, 00:10:48.542 "max_latency_us": 1638.4 00:10:48.542 } 00:10:48.542 ], 00:10:48.542 "core_count": 1 00:10:48.542 } 00:10:48.542 06:20:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.542 06:20:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62781 00:10:48.542 06:20:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 62781 ']' 00:10:48.542 06:20:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 62781 00:10:48.542 06:20:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:48.542 06:20:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:48.542 06:20:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62781 00:10:48.542 06:20:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:48.542 06:20:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:48.542 06:20:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62781' 00:10:48.542 killing process with pid 62781 00:10:48.542 06:20:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 62781 00:10:48.542 [2024-11-26 06:20:32.460041] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:48.542 06:20:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 62781 00:10:48.542 [2024-11-26 06:20:32.624188] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:49.924 06:20:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.kntzSvHn7w 00:10:49.924 06:20:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:49.924 06:20:33 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:49.924 06:20:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:10:49.924 06:20:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:49.924 06:20:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:49.924 06:20:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:49.924 06:20:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:10:49.924 00:10:49.924 real 0m4.762s 00:10:49.924 user 0m5.695s 00:10:49.924 sys 0m0.615s 00:10:49.924 ************************************ 00:10:49.924 END TEST raid_read_error_test 00:10:49.924 ************************************ 00:10:49.924 06:20:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:49.924 06:20:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.924 06:20:34 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:10:49.924 06:20:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:49.924 06:20:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:49.924 06:20:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:50.184 ************************************ 00:10:50.184 START TEST raid_write_error_test 00:10:50.184 ************************************ 00:10:50.184 06:20:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:10:50.184 06:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:50.184 06:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:10:50.184 06:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:50.185 
06:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:50.185 06:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:50.185 06:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:50.185 06:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:50.185 06:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:50.185 06:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:50.185 06:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:50.185 06:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:50.185 06:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:50.185 06:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:50.185 06:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:50.185 06:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:50.185 06:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:50.185 06:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:50.185 06:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:50.185 06:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:50.185 06:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:50.185 06:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:50.185 06:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:50.185 06:20:34 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.0rGSk3rrTD 00:10:50.185 06:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62931 00:10:50.185 06:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:50.185 06:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62931 00:10:50.185 06:20:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 62931 ']' 00:10:50.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:50.185 06:20:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:50.185 06:20:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:50.185 06:20:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:50.185 06:20:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:50.185 06:20:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.185 [2024-11-26 06:20:34.174045] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:10:50.185 [2024-11-26 06:20:34.174305] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62931 ] 00:10:50.445 [2024-11-26 06:20:34.355673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:50.445 [2024-11-26 06:20:34.495506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.709 [2024-11-26 06:20:34.743291] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:50.709 [2024-11-26 06:20:34.743473] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:51.306 06:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:51.306 06:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:51.306 06:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:51.306 06:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:51.306 06:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.306 06:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.306 BaseBdev1_malloc 00:10:51.306 06:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.306 06:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:51.306 06:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.306 06:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.306 true 00:10:51.306 06:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:51.306 06:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:51.306 06:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.306 06:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.306 [2024-11-26 06:20:35.194500] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:51.306 [2024-11-26 06:20:35.194587] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:51.306 [2024-11-26 06:20:35.194617] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:51.306 [2024-11-26 06:20:35.194631] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:51.306 [2024-11-26 06:20:35.197373] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:51.306 [2024-11-26 06:20:35.197438] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:51.307 BaseBdev1 00:10:51.307 06:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.307 06:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:51.307 06:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:51.307 06:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.307 06:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.307 BaseBdev2_malloc 00:10:51.307 06:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.307 06:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:51.307 06:20:35 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.307 06:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.307 true 00:10:51.307 06:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.307 06:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:51.307 06:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.307 06:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.307 [2024-11-26 06:20:35.268560] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:51.307 [2024-11-26 06:20:35.268787] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:51.307 [2024-11-26 06:20:35.268825] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:51.307 [2024-11-26 06:20:35.268841] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:51.307 [2024-11-26 06:20:35.271567] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:51.307 [2024-11-26 06:20:35.271626] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:51.307 BaseBdev2 00:10:51.307 06:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.307 06:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:10:51.307 06:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.307 06:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.307 [2024-11-26 06:20:35.280684] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:10:51.307 [2024-11-26 06:20:35.282953] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:51.307 [2024-11-26 06:20:35.283260] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:51.307 [2024-11-26 06:20:35.283283] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:51.307 [2024-11-26 06:20:35.283601] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:51.307 [2024-11-26 06:20:35.283876] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:51.307 [2024-11-26 06:20:35.283893] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:51.307 [2024-11-26 06:20:35.284125] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:51.307 06:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.307 06:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:51.307 06:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:51.307 06:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:51.307 06:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:51.307 06:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:51.307 06:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:51.307 06:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.307 06:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.307 06:20:35 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.307 06:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.307 06:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.307 06:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:51.307 06:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.307 06:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.307 06:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.307 06:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.307 "name": "raid_bdev1", 00:10:51.307 "uuid": "64be9a11-420a-4ce1-afca-03c908802eaf", 00:10:51.307 "strip_size_kb": 64, 00:10:51.307 "state": "online", 00:10:51.307 "raid_level": "concat", 00:10:51.307 "superblock": true, 00:10:51.307 "num_base_bdevs": 2, 00:10:51.307 "num_base_bdevs_discovered": 2, 00:10:51.307 "num_base_bdevs_operational": 2, 00:10:51.307 "base_bdevs_list": [ 00:10:51.307 { 00:10:51.307 "name": "BaseBdev1", 00:10:51.307 "uuid": "72357806-6fd0-599b-abfb-cc4d54ec5e2c", 00:10:51.307 "is_configured": true, 00:10:51.307 "data_offset": 2048, 00:10:51.307 "data_size": 63488 00:10:51.307 }, 00:10:51.307 { 00:10:51.307 "name": "BaseBdev2", 00:10:51.307 "uuid": "1503bd58-edd6-5a06-9698-6d2657471864", 00:10:51.307 "is_configured": true, 00:10:51.307 "data_offset": 2048, 00:10:51.307 "data_size": 63488 00:10:51.307 } 00:10:51.307 ] 00:10:51.307 }' 00:10:51.307 06:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.307 06:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.886 06:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:10:51.886 06:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:51.886 [2024-11-26 06:20:35.913356] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:52.822 06:20:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:52.822 06:20:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.822 06:20:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.822 06:20:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.822 06:20:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:52.822 06:20:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:52.822 06:20:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:10:52.822 06:20:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:52.822 06:20:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:52.822 06:20:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:52.822 06:20:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:52.822 06:20:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:52.822 06:20:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:52.822 06:20:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.822 06:20:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:10:52.822 06:20:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.822 06:20:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.822 06:20:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.822 06:20:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.822 06:20:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.822 06:20:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:52.822 06:20:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.822 06:20:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.822 "name": "raid_bdev1", 00:10:52.822 "uuid": "64be9a11-420a-4ce1-afca-03c908802eaf", 00:10:52.822 "strip_size_kb": 64, 00:10:52.822 "state": "online", 00:10:52.822 "raid_level": "concat", 00:10:52.822 "superblock": true, 00:10:52.822 "num_base_bdevs": 2, 00:10:52.822 "num_base_bdevs_discovered": 2, 00:10:52.822 "num_base_bdevs_operational": 2, 00:10:52.822 "base_bdevs_list": [ 00:10:52.822 { 00:10:52.822 "name": "BaseBdev1", 00:10:52.822 "uuid": "72357806-6fd0-599b-abfb-cc4d54ec5e2c", 00:10:52.822 "is_configured": true, 00:10:52.822 "data_offset": 2048, 00:10:52.822 "data_size": 63488 00:10:52.822 }, 00:10:52.822 { 00:10:52.822 "name": "BaseBdev2", 00:10:52.822 "uuid": "1503bd58-edd6-5a06-9698-6d2657471864", 00:10:52.822 "is_configured": true, 00:10:52.822 "data_offset": 2048, 00:10:52.822 "data_size": 63488 00:10:52.822 } 00:10:52.822 ] 00:10:52.822 }' 00:10:52.822 06:20:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.822 06:20:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.389 06:20:37 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:53.389 06:20:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.389 06:20:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.390 [2024-11-26 06:20:37.253951] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:53.390 [2024-11-26 06:20:37.253997] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:53.390 [2024-11-26 06:20:37.257149] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:53.390 [2024-11-26 06:20:37.257244] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:53.390 [2024-11-26 06:20:37.257300] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:53.390 [2024-11-26 06:20:37.257369] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:53.390 06:20:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.390 { 00:10:53.390 "results": [ 00:10:53.390 { 00:10:53.390 "job": "raid_bdev1", 00:10:53.390 "core_mask": "0x1", 00:10:53.390 "workload": "randrw", 00:10:53.390 "percentage": 50, 00:10:53.390 "status": "finished", 00:10:53.390 "queue_depth": 1, 00:10:53.390 "io_size": 131072, 00:10:53.390 "runtime": 1.340875, 00:10:53.390 "iops": 13780.553742891769, 00:10:53.390 "mibps": 1722.569217861471, 00:10:53.390 "io_failed": 1, 00:10:53.390 "io_timeout": 0, 00:10:53.390 "avg_latency_us": 100.67362517726366, 00:10:53.390 "min_latency_us": 26.494323144104804, 00:10:53.390 "max_latency_us": 1488.1537117903931 00:10:53.390 } 00:10:53.390 ], 00:10:53.390 "core_count": 1 00:10:53.390 } 00:10:53.390 06:20:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62931 00:10:53.390 06:20:37 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 62931 ']' 00:10:53.390 06:20:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 62931 00:10:53.390 06:20:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:10:53.390 06:20:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:53.390 06:20:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62931 00:10:53.390 06:20:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:53.390 06:20:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:53.390 killing process with pid 62931 00:10:53.390 06:20:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62931' 00:10:53.390 06:20:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 62931 00:10:53.390 06:20:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 62931 00:10:53.390 [2024-11-26 06:20:37.301574] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:53.390 [2024-11-26 06:20:37.453853] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:54.765 06:20:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:54.765 06:20:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.0rGSk3rrTD 00:10:54.765 06:20:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:54.765 ************************************ 00:10:54.765 END TEST raid_write_error_test 00:10:54.765 ************************************ 00:10:54.765 06:20:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:10:54.765 06:20:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # 
has_redundancy concat
00:10:54.765 06:20:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:10:54.765 06:20:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1
00:10:54.765 06:20:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]]
00:10:54.765
00:10:54.765 real 0m4.677s
00:10:54.765 user 0m5.689s
00:10:54.765 sys 0m0.587s
00:10:54.765 06:20:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:54.765 06:20:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:54.765 06:20:38 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1
00:10:54.765 06:20:38 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false
00:10:54.765 06:20:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:10:54.765 06:20:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:54.765 06:20:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:10:54.765 ************************************
00:10:54.765 START TEST raid_state_function_test
00:10:54.765 ************************************
00:10:54.765 06:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false
00:10:54.765 06:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1
00:10:54.765 06:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2
00:10:54.765 06:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:10:54.765 06:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:10:54.765 06:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:10:54.765 06:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:10:54.765 06:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:10:54.765 06:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:10:54.765 06:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:10:54.765 06:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:10:54.765 06:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:10:54.765 06:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:10:54.765 06:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:10:54.765 06:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:10:54.765 06:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:10:54.765 06:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:10:54.765 06:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:10:54.765 06:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:10:54.765 06:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']'
00:10:54.765 06:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0
00:10:54.765 06:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
Process raid pid: 63076
00:10:54.765 06:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
00:10:54.765 06:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63076
00:10:54.765 06:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63076'
00:10:54.765 06:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63076
00:10:54.765 06:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 63076 ']'
00:10:54.765 06:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:54.765 06:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:54.765 06:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:54.765 06:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:54.765 06:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:54.765 06:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:55.024 [2024-11-26 06:20:38.902345] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization...
00:10:55.024 [2024-11-26 06:20:38.902598] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:55.024 [2024-11-26 06:20:39.087636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:55.281 [2024-11-26 06:20:39.215242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:55.608 [2024-11-26 06:20:39.432856] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:55.608 [2024-11-26 06:20:39.433006] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:55.871 06:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:55.871 06:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0
00:10:55.871 06:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:10:55.871 06:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:55.871 06:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:55.871 [2024-11-26 06:20:39.799977] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:10:55.871 [2024-11-26 06:20:39.800169] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:10:55.871 [2024-11-26 06:20:39.800225] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:10:55.871 [2024-11-26 06:20:39.800292] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:10:55.871 06:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:55.871 06:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:10:55.871 06:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:55.871 06:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:55.871 06:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:55.871 06:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:55.871 06:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:10:55.871 06:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:55.871 06:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:55.871 06:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:55.871 06:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:55.871 06:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:55.871 06:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:55.871 06:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:55.871 06:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:55.871 06:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:55.871 06:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:55.871 "name": "Existed_Raid",
00:10:55.871 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:55.871 "strip_size_kb": 0,
00:10:55.871 "state": "configuring",
00:10:55.871 "raid_level": "raid1",
00:10:55.871 "superblock": false,
00:10:55.871 "num_base_bdevs": 2,
00:10:55.871 "num_base_bdevs_discovered": 0,
00:10:55.871 "num_base_bdevs_operational": 2,
00:10:55.871 "base_bdevs_list": [
00:10:55.871 {
00:10:55.871 "name": "BaseBdev1",
00:10:55.871 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:55.871 "is_configured": false,
00:10:55.871 "data_offset": 0,
00:10:55.871 "data_size": 0
00:10:55.871 },
00:10:55.871 {
00:10:55.871 "name": "BaseBdev2",
00:10:55.871 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:55.871 "is_configured": false,
00:10:55.871 "data_offset": 0,
00:10:55.871 "data_size": 0
00:10:55.871 }
00:10:55.871 ]
00:10:55.871 }'
00:10:55.871 06:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:55.871 06:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:56.437 06:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:10:56.437 06:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:56.437 06:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:56.437 [2024-11-26 06:20:40.283404] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:10:56.437 [2024-11-26 06:20:40.283547] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:10:56.437 06:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:56.437 06:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:10:56.437 06:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:56.437 06:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:56.437 [2024-11-26 06:20:40.291366] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:10:56.437 [2024-11-26 06:20:40.291419] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:10:56.437 [2024-11-26 06:20:40.291431] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:10:56.437 [2024-11-26 06:20:40.291445] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:10:56.437 06:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:56.437 06:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:10:56.437 06:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:56.437 06:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:56.437 [2024-11-26 06:20:40.341663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:10:56.437 BaseBdev1
00:10:56.437 06:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:56.437 06:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:10:56.437 06:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:10:56.437 06:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:10:56.437 06:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:10:56.437 06:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:10:56.437 06:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:10:56.437 06:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:10:56.437 06:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:56.437 06:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:56.437 06:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:56.437 06:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:10:56.437 06:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:56.437 06:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:56.437 [
00:10:56.437 {
00:10:56.437 "name": "BaseBdev1",
00:10:56.437 "aliases": [
00:10:56.438 "0169bb90-2024-43ba-b9bc-99435b872d78"
00:10:56.438 ],
00:10:56.438 "product_name": "Malloc disk",
00:10:56.438 "block_size": 512,
00:10:56.438 "num_blocks": 65536,
00:10:56.438 "uuid": "0169bb90-2024-43ba-b9bc-99435b872d78",
00:10:56.438 "assigned_rate_limits": {
00:10:56.438 "rw_ios_per_sec": 0,
00:10:56.438 "rw_mbytes_per_sec": 0,
00:10:56.438 "r_mbytes_per_sec": 0,
00:10:56.438 "w_mbytes_per_sec": 0
00:10:56.438 },
00:10:56.438 "claimed": true,
00:10:56.438 "claim_type": "exclusive_write",
00:10:56.438 "zoned": false,
00:10:56.438 "supported_io_types": {
00:10:56.438 "read": true,
00:10:56.438 "write": true,
00:10:56.438 "unmap": true,
00:10:56.438 "flush": true,
00:10:56.438 "reset": true,
00:10:56.438 "nvme_admin": false,
00:10:56.438 "nvme_io": false,
00:10:56.438 "nvme_io_md": false,
00:10:56.438 "write_zeroes": true,
00:10:56.438 "zcopy": true,
00:10:56.438 "get_zone_info": false,
00:10:56.438 "zone_management": false,
00:10:56.438 "zone_append": false,
00:10:56.438 "compare": false,
00:10:56.438 "compare_and_write": false,
00:10:56.438 "abort": true,
00:10:56.438 "seek_hole": false,
00:10:56.438 "seek_data": false,
00:10:56.438 "copy": true,
00:10:56.438 "nvme_iov_md": false
00:10:56.438 },
00:10:56.438 "memory_domains": [
00:10:56.438 {
00:10:56.438 "dma_device_id": "system",
00:10:56.438 "dma_device_type": 1
00:10:56.438 },
00:10:56.438 {
00:10:56.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:56.438 "dma_device_type": 2
00:10:56.438 }
00:10:56.438 ],
00:10:56.438 "driver_specific": {}
00:10:56.438 }
00:10:56.438 ]
00:10:56.438 06:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:56.438 06:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:10:56.438 06:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:10:56.438 06:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:56.438 06:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:56.438 06:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:56.438 06:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:56.438 06:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:10:56.438 06:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:56.438 06:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:56.438 06:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:56.438 06:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:56.438 06:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:56.438 06:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:56.438 06:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:56.438 06:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:56.438 06:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:56.438 06:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:56.438 "name": "Existed_Raid",
00:10:56.438 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:56.438 "strip_size_kb": 0,
00:10:56.438 "state": "configuring",
00:10:56.438 "raid_level": "raid1",
00:10:56.438 "superblock": false,
00:10:56.438 "num_base_bdevs": 2,
00:10:56.438 "num_base_bdevs_discovered": 1,
00:10:56.438 "num_base_bdevs_operational": 2,
00:10:56.438 "base_bdevs_list": [
00:10:56.438 {
00:10:56.438 "name": "BaseBdev1",
00:10:56.438 "uuid": "0169bb90-2024-43ba-b9bc-99435b872d78",
00:10:56.438 "is_configured": true,
00:10:56.438 "data_offset": 0,
00:10:56.438 "data_size": 65536
00:10:56.438 },
00:10:56.438 {
00:10:56.438 "name": "BaseBdev2",
00:10:56.438 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:56.438 "is_configured": false,
00:10:56.438 "data_offset": 0,
00:10:56.438 "data_size": 0
00:10:56.438 }
00:10:56.438 ]
00:10:56.438 }'
00:10:56.438 06:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:56.438 06:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:56.697 06:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:10:56.697 06:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:56.697 06:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:56.697 [2024-11-26 06:20:40.788973] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:10:56.697 [2024-11-26 06:20:40.789180] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:10:56.697 06:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:56.697 06:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:10:56.697 06:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:56.697 06:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:56.697 [2024-11-26 06:20:40.801044] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:10:56.697 [2024-11-26 06:20:40.803393] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:10:56.697 [2024-11-26 06:20:40.803559] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:10:56.697 06:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:56.697 06:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:10:56.697 06:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:10:56.697 06:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:10:56.697 06:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:56.697 06:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:56.697 06:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:56.697 06:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:56.697 06:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:10:56.697 06:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:56.697 06:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:56.697 06:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:56.697 06:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:56.697 06:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:56.697 06:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:56.697 06:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:56.697 06:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:56.956 06:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:56.956 06:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:56.956 "name": "Existed_Raid",
00:10:56.956 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:56.956 "strip_size_kb": 0,
00:10:56.956 "state": "configuring",
00:10:56.956 "raid_level": "raid1",
00:10:56.956 "superblock": false,
00:10:56.956 "num_base_bdevs": 2,
00:10:56.956 "num_base_bdevs_discovered": 1,
00:10:56.956 "num_base_bdevs_operational": 2,
00:10:56.956 "base_bdevs_list": [
00:10:56.956 {
00:10:56.956 "name": "BaseBdev1",
00:10:56.956 "uuid": "0169bb90-2024-43ba-b9bc-99435b872d78",
00:10:56.956 "is_configured": true,
00:10:56.956 "data_offset": 0,
00:10:56.956 "data_size": 65536
00:10:56.956 },
00:10:56.956 {
00:10:56.956 "name": "BaseBdev2",
00:10:56.956 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:56.956 "is_configured": false,
00:10:56.956 "data_offset": 0,
00:10:56.956 "data_size": 0
00:10:56.956 }
00:10:56.956 ]
00:10:56.956 }'
00:10:56.956 06:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:56.956 06:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:57.216 06:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:10:57.216 06:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:57.216 06:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:57.216 [2024-11-26 06:20:41.304308] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:10:57.216 [2024-11-26 06:20:41.304482] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:10:57.216 [2024-11-26 06:20:41.304576] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512
00:10:57.216 [2024-11-26 06:20:41.304963] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:10:57.216 [2024-11-26 06:20:41.305243] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:10:57.216 [2024-11-26 06:20:41.305321] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:10:57.216 [2024-11-26 06:20:41.305691] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:57.216 BaseBdev2
00:10:57.216 06:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:57.216 06:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:10:57.216 06:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:10:57.216 06:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:10:57.216 06:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:10:57.216 06:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:10:57.216 06:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:10:57.216 06:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:10:57.216 06:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:57.216 06:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:57.216 06:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:57.216 06:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:10:57.216 06:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:57.216 06:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:57.216 [
00:10:57.216 {
00:10:57.216 "name": "BaseBdev2",
00:10:57.216 "aliases": [
00:10:57.216 "93ff574f-5e60-4a08-b329-c5ea84cb77b6"
00:10:57.216 ],
00:10:57.216 "product_name": "Malloc disk",
00:10:57.216 "block_size": 512,
00:10:57.216 "num_blocks": 65536,
00:10:57.216 "uuid": "93ff574f-5e60-4a08-b329-c5ea84cb77b6",
00:10:57.216 "assigned_rate_limits": {
00:10:57.216 "rw_ios_per_sec": 0,
00:10:57.216 "rw_mbytes_per_sec": 0,
00:10:57.216 "r_mbytes_per_sec": 0,
00:10:57.216 "w_mbytes_per_sec": 0
00:10:57.216 },
00:10:57.216 "claimed": true,
00:10:57.216 "claim_type": "exclusive_write",
00:10:57.216 "zoned": false,
00:10:57.216 "supported_io_types": {
00:10:57.216 "read": true,
00:10:57.216 "write": true,
00:10:57.216 "unmap": true,
00:10:57.216 "flush": true,
00:10:57.216 "reset": true,
00:10:57.216 "nvme_admin": false,
00:10:57.216 "nvme_io": false,
00:10:57.216 "nvme_io_md": false,
00:10:57.216 "write_zeroes": true,
00:10:57.216 "zcopy": true,
00:10:57.216 "get_zone_info": false,
00:10:57.216 "zone_management": false,
00:10:57.216 "zone_append": false,
00:10:57.216 "compare": false,
00:10:57.216 "compare_and_write": false,
00:10:57.216 "abort": true,
00:10:57.216 "seek_hole": false,
00:10:57.216 "seek_data": false,
00:10:57.216 "copy": true,
00:10:57.216 "nvme_iov_md": false
00:10:57.216 },
00:10:57.216 "memory_domains": [
00:10:57.216 {
00:10:57.216 "dma_device_id": "system",
00:10:57.216 "dma_device_type": 1
00:10:57.216 },
00:10:57.216 {
00:10:57.216 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:57.216 "dma_device_type": 2
00:10:57.216 }
00:10:57.216 ],
00:10:57.216 "driver_specific": {}
00:10:57.216 }
00:10:57.216 ]
00:10:57.216 06:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:57.216 06:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:10:57.216 06:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:10:57.216 06:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:10:57.216 06:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2
00:10:57.216 06:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:57.216 06:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:10:57.216 06:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:57.216 06:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:57.216 06:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:10:57.216 06:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:57.216 06:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:57.216 06:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:57.216 06:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:57.216 06:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:57.216 06:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:57.216 06:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:57.216 06:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:57.475 06:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:57.475 06:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:57.475 "name": "Existed_Raid",
00:10:57.475 "uuid": "230a9285-1fa7-4d6c-819b-4a6ce53a16d7",
00:10:57.475 "strip_size_kb": 0,
00:10:57.475 "state": "online",
00:10:57.475 "raid_level": "raid1",
00:10:57.475 "superblock": false,
00:10:57.475 "num_base_bdevs": 2,
00:10:57.475 "num_base_bdevs_discovered": 2,
00:10:57.475 "num_base_bdevs_operational": 2,
00:10:57.475 "base_bdevs_list": [
00:10:57.475 {
00:10:57.475 "name": "BaseBdev1",
00:10:57.475 "uuid": "0169bb90-2024-43ba-b9bc-99435b872d78",
00:10:57.475 "is_configured": true,
00:10:57.475 "data_offset": 0,
00:10:57.475 "data_size": 65536
00:10:57.475 },
00:10:57.475 {
00:10:57.475 "name": "BaseBdev2",
00:10:57.475 "uuid": "93ff574f-5e60-4a08-b329-c5ea84cb77b6",
00:10:57.475 "is_configured": true,
00:10:57.475 "data_offset": 0,
00:10:57.475 "data_size": 65536
00:10:57.475 }
00:10:57.475 ]
00:10:57.475 }'
00:10:57.475 06:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:57.475 06:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:57.735 06:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:10:57.735 06:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:10:57.735 06:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:10:57.735 06:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:10:57.735 06:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:10:57.735 06:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:10:57.735 06:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:10:57.735 06:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:57.735 06:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:57.735 06:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:10:57.735 [2024-11-26 06:20:41.812265] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:10:57.735 06:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:57.735 06:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:10:57.735 "name": "Existed_Raid",
00:10:57.735 "aliases": [
00:10:57.735 "230a9285-1fa7-4d6c-819b-4a6ce53a16d7"
00:10:57.735 ],
00:10:57.735 "product_name": "Raid Volume",
00:10:57.735 "block_size": 512,
00:10:57.735 "num_blocks": 65536,
00:10:57.735 "uuid": "230a9285-1fa7-4d6c-819b-4a6ce53a16d7",
00:10:57.735 "assigned_rate_limits": {
00:10:57.735 "rw_ios_per_sec": 0,
00:10:57.735 "rw_mbytes_per_sec": 0,
00:10:57.735 "r_mbytes_per_sec": 0,
00:10:57.735 "w_mbytes_per_sec": 0
00:10:57.735 },
00:10:57.735 "claimed": false,
00:10:57.735 "zoned": false,
00:10:57.735 "supported_io_types": {
00:10:57.735 "read": true,
00:10:57.735 "write": true,
00:10:57.735 "unmap": false,
00:10:57.735 "flush": false,
00:10:57.735 "reset": true,
00:10:57.735 "nvme_admin": false,
00:10:57.735 "nvme_io": false,
00:10:57.735 "nvme_io_md": false,
00:10:57.735 "write_zeroes": true,
00:10:57.735 "zcopy": false,
00:10:57.735 "get_zone_info": false,
00:10:57.735 "zone_management": false,
00:10:57.735 "zone_append": false,
00:10:57.735 "compare": false,
00:10:57.735 "compare_and_write": false,
00:10:57.735 "abort": false,
00:10:57.735 "seek_hole": false,
00:10:57.735 "seek_data": false,
00:10:57.735 "copy": false,
00:10:57.735 "nvme_iov_md": false
00:10:57.735 },
00:10:57.735 "memory_domains": [
00:10:57.735 {
00:10:57.735 "dma_device_id": "system",
00:10:57.735 "dma_device_type": 1
00:10:57.735 },
00:10:57.735 {
00:10:57.735 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:57.735 "dma_device_type": 2
00:10:57.735 },
00:10:57.735 {
00:10:57.735 "dma_device_id": "system",
00:10:57.735 "dma_device_type": 1
00:10:57.735 },
00:10:57.735 {
00:10:57.735 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:57.735 "dma_device_type": 2
00:10:57.735 }
00:10:57.735 ],
00:10:57.735 "driver_specific": {
00:10:57.735 "raid": {
00:10:57.735 "uuid": "230a9285-1fa7-4d6c-819b-4a6ce53a16d7",
00:10:57.735 "strip_size_kb": 0,
00:10:57.735 "state": "online",
00:10:57.735 "raid_level": "raid1",
00:10:57.735 "superblock": false,
00:10:57.735 "num_base_bdevs": 2,
00:10:57.735 "num_base_bdevs_discovered": 2,
00:10:57.735 "num_base_bdevs_operational": 2,
00:10:57.735 "base_bdevs_list": [
00:10:57.735 {
00:10:57.735 "name": "BaseBdev1",
00:10:57.735 "uuid": "0169bb90-2024-43ba-b9bc-99435b872d78",
00:10:57.735 "is_configured": true,
00:10:57.735 "data_offset": 0,
00:10:57.735 "data_size": 65536
00:10:57.735 },
00:10:57.735 {
00:10:57.735 "name": "BaseBdev2",
00:10:57.735 "uuid": "93ff574f-5e60-4a08-b329-c5ea84cb77b6",
00:10:57.735 "is_configured": true,
00:10:57.735 "data_offset": 0,
00:10:57.735 "data_size": 65536
00:10:57.735 }
00:10:57.735 ]
00:10:57.735 }
00:10:57.735 }
00:10:57.735 }'
00:10:57.735 06:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:10:57.996 06:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:10:57.996 BaseBdev2'
00:10:57.996 06:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:57.996 06:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:10:57.996 06:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:57.996 06:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:57.996 06:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:10:57.996 06:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:57.996 06:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:57.996 06:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:57.996 06:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:57.996 06:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:57.996 06:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:57.996 06:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:10:57.996 06:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:57.996 06:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:57.996 06:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:57.996 06:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:57.996 06:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:57.996 06:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:57.996 06:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:10:57.996 06:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:57.996 06:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:57.996 [2024-11-26 06:20:42.036002] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:10:58.256 06:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:58.256 06:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state
00:10:58.256 06:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1
00:10:58.256 06:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:10:58.256 06:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0
00:10:58.256 06:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online
00:10:58.256 06:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1
00:10:58.256 06:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local
raid_bdev_name=Existed_Raid 00:10:58.256 06:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:58.256 06:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:58.256 06:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:58.256 06:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:58.256 06:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.256 06:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.256 06:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.256 06:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.256 06:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.256 06:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.256 06:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.256 06:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.256 06:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.256 06:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.256 "name": "Existed_Raid", 00:10:58.256 "uuid": "230a9285-1fa7-4d6c-819b-4a6ce53a16d7", 00:10:58.256 "strip_size_kb": 0, 00:10:58.256 "state": "online", 00:10:58.256 "raid_level": "raid1", 00:10:58.256 "superblock": false, 00:10:58.256 "num_base_bdevs": 2, 00:10:58.256 "num_base_bdevs_discovered": 1, 00:10:58.256 "num_base_bdevs_operational": 1, 00:10:58.256 "base_bdevs_list": [ 00:10:58.256 { 
00:10:58.256 "name": null, 00:10:58.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.256 "is_configured": false, 00:10:58.256 "data_offset": 0, 00:10:58.256 "data_size": 65536 00:10:58.256 }, 00:10:58.256 { 00:10:58.256 "name": "BaseBdev2", 00:10:58.256 "uuid": "93ff574f-5e60-4a08-b329-c5ea84cb77b6", 00:10:58.256 "is_configured": true, 00:10:58.256 "data_offset": 0, 00:10:58.256 "data_size": 65536 00:10:58.256 } 00:10:58.256 ] 00:10:58.256 }' 00:10:58.256 06:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.256 06:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.516 06:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:58.516 06:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:58.516 06:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.516 06:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.516 06:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.516 06:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:58.516 06:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.516 06:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:58.516 06:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:58.516 06:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:58.516 06:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.516 06:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:58.516 [2024-11-26 06:20:42.643415] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:58.516 [2024-11-26 06:20:42.643541] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:58.776 [2024-11-26 06:20:42.757224] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:58.776 [2024-11-26 06:20:42.757291] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:58.776 [2024-11-26 06:20:42.757307] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:58.776 06:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.776 06:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:58.776 06:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:58.776 06:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.776 06:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.776 06:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.776 06:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:58.776 06:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.776 06:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:58.776 06:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:58.776 06:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:10:58.776 06:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63076 00:10:58.776 06:20:42 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 63076 ']' 00:10:58.776 06:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 63076 00:10:58.777 06:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:58.777 06:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:58.777 06:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63076 00:10:58.777 killing process with pid 63076 00:10:58.777 06:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:58.777 06:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:58.777 06:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63076' 00:10:58.777 06:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 63076 00:10:58.777 06:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 63076 00:10:58.777 [2024-11-26 06:20:42.843001] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:58.777 [2024-11-26 06:20:42.863407] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:00.153 06:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:00.153 00:11:00.153 real 0m5.368s 00:11:00.153 user 0m7.639s 00:11:00.153 sys 0m0.868s 00:11:00.153 06:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:00.153 06:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.153 ************************************ 00:11:00.153 END TEST raid_state_function_test 00:11:00.154 ************************************ 00:11:00.154 06:20:44 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:11:00.154 06:20:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:00.154 06:20:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:00.154 06:20:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:00.154 ************************************ 00:11:00.154 START TEST raid_state_function_test_sb 00:11:00.154 ************************************ 00:11:00.154 06:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:11:00.154 06:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:00.154 06:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:11:00.154 06:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:00.154 06:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:00.154 06:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:00.154 06:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:00.154 06:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:00.154 06:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:00.154 06:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:00.154 06:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:00.154 06:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:00.154 06:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:00.154 06:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:00.154 06:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:00.154 06:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:00.154 06:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:00.154 06:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:00.154 06:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:00.154 06:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:00.154 06:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:00.154 06:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:00.154 06:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:00.154 06:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=63329 00:11:00.154 06:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:00.154 06:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63329' 00:11:00.154 Process raid pid: 63329 00:11:00.154 06:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 63329 00:11:00.154 06:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 63329 ']' 00:11:00.154 06:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:00.154 06:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:00.154 06:20:44 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:00.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:00.154 06:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:00.154 06:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.412 [2024-11-26 06:20:44.353786] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:11:00.412 [2024-11-26 06:20:44.354066] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:00.412 [2024-11-26 06:20:44.543270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:00.670 [2024-11-26 06:20:44.697898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.928 [2024-11-26 06:20:44.978367] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:00.928 [2024-11-26 06:20:44.978537] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:01.186 06:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:01.186 06:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:01.186 06:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:01.186 06:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.186 06:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.186 [2024-11-26 06:20:45.268849] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:01.186 [2024-11-26 06:20:45.268931] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:01.186 [2024-11-26 06:20:45.268945] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:01.186 [2024-11-26 06:20:45.268958] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:01.186 06:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.186 06:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:11:01.186 06:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:01.186 06:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:01.187 06:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:01.187 06:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:01.187 06:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:01.187 06:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.187 06:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.187 06:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.187 06:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.187 06:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.187 06:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:11:01.187 06:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.187 06:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.187 06:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.445 06:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.445 "name": "Existed_Raid", 00:11:01.445 "uuid": "b6dd727d-1ce8-4d2a-bd9e-2769fa30fbbb", 00:11:01.445 "strip_size_kb": 0, 00:11:01.445 "state": "configuring", 00:11:01.445 "raid_level": "raid1", 00:11:01.445 "superblock": true, 00:11:01.445 "num_base_bdevs": 2, 00:11:01.445 "num_base_bdevs_discovered": 0, 00:11:01.445 "num_base_bdevs_operational": 2, 00:11:01.445 "base_bdevs_list": [ 00:11:01.445 { 00:11:01.445 "name": "BaseBdev1", 00:11:01.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.445 "is_configured": false, 00:11:01.445 "data_offset": 0, 00:11:01.445 "data_size": 0 00:11:01.445 }, 00:11:01.445 { 00:11:01.445 "name": "BaseBdev2", 00:11:01.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.445 "is_configured": false, 00:11:01.445 "data_offset": 0, 00:11:01.445 "data_size": 0 00:11:01.445 } 00:11:01.445 ] 00:11:01.445 }' 00:11:01.445 06:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.445 06:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.704 06:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:01.704 06:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.704 06:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.704 [2024-11-26 06:20:45.763947] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:11:01.704 [2024-11-26 06:20:45.764096] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:01.704 06:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.704 06:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:01.704 06:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.704 06:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.704 [2024-11-26 06:20:45.775964] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:01.704 [2024-11-26 06:20:45.776028] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:01.704 [2024-11-26 06:20:45.776039] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:01.704 [2024-11-26 06:20:45.776069] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:01.704 06:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.704 06:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:01.704 06:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.704 06:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.963 [2024-11-26 06:20:45.857283] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:01.963 BaseBdev1 00:11:01.963 06:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.963 06:20:45 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:01.963 06:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:01.963 06:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:01.963 06:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:01.963 06:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:01.963 06:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:01.963 06:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:01.963 06:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.963 06:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.963 06:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.963 06:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:01.963 06:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.963 06:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.963 [ 00:11:01.963 { 00:11:01.963 "name": "BaseBdev1", 00:11:01.963 "aliases": [ 00:11:01.963 "494445da-bbb3-48ae-8dbf-b5dc7799e35d" 00:11:01.963 ], 00:11:01.963 "product_name": "Malloc disk", 00:11:01.963 "block_size": 512, 00:11:01.963 "num_blocks": 65536, 00:11:01.963 "uuid": "494445da-bbb3-48ae-8dbf-b5dc7799e35d", 00:11:01.963 "assigned_rate_limits": { 00:11:01.963 "rw_ios_per_sec": 0, 00:11:01.963 "rw_mbytes_per_sec": 0, 00:11:01.963 "r_mbytes_per_sec": 0, 00:11:01.963 "w_mbytes_per_sec": 0 00:11:01.963 }, 00:11:01.963 "claimed": true, 
00:11:01.963 "claim_type": "exclusive_write", 00:11:01.963 "zoned": false, 00:11:01.963 "supported_io_types": { 00:11:01.963 "read": true, 00:11:01.963 "write": true, 00:11:01.963 "unmap": true, 00:11:01.963 "flush": true, 00:11:01.963 "reset": true, 00:11:01.963 "nvme_admin": false, 00:11:01.963 "nvme_io": false, 00:11:01.963 "nvme_io_md": false, 00:11:01.963 "write_zeroes": true, 00:11:01.963 "zcopy": true, 00:11:01.963 "get_zone_info": false, 00:11:01.963 "zone_management": false, 00:11:01.963 "zone_append": false, 00:11:01.963 "compare": false, 00:11:01.963 "compare_and_write": false, 00:11:01.963 "abort": true, 00:11:01.963 "seek_hole": false, 00:11:01.963 "seek_data": false, 00:11:01.963 "copy": true, 00:11:01.963 "nvme_iov_md": false 00:11:01.963 }, 00:11:01.963 "memory_domains": [ 00:11:01.963 { 00:11:01.963 "dma_device_id": "system", 00:11:01.963 "dma_device_type": 1 00:11:01.963 }, 00:11:01.963 { 00:11:01.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.963 "dma_device_type": 2 00:11:01.963 } 00:11:01.963 ], 00:11:01.963 "driver_specific": {} 00:11:01.963 } 00:11:01.963 ] 00:11:01.963 06:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.963 06:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:01.963 06:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:11:01.963 06:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:01.963 06:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:01.963 06:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:01.963 06:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:01.963 06:20:45 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:01.963 06:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.963 06:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.963 06:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.963 06:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.963 06:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.963 06:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.963 06:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.963 06:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:01.963 06:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.963 06:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.963 "name": "Existed_Raid", 00:11:01.963 "uuid": "fc59138d-a60b-4645-9d04-ce1f2d29dc8e", 00:11:01.963 "strip_size_kb": 0, 00:11:01.963 "state": "configuring", 00:11:01.963 "raid_level": "raid1", 00:11:01.963 "superblock": true, 00:11:01.963 "num_base_bdevs": 2, 00:11:01.963 "num_base_bdevs_discovered": 1, 00:11:01.963 "num_base_bdevs_operational": 2, 00:11:01.963 "base_bdevs_list": [ 00:11:01.963 { 00:11:01.963 "name": "BaseBdev1", 00:11:01.963 "uuid": "494445da-bbb3-48ae-8dbf-b5dc7799e35d", 00:11:01.963 "is_configured": true, 00:11:01.963 "data_offset": 2048, 00:11:01.963 "data_size": 63488 00:11:01.963 }, 00:11:01.963 { 00:11:01.963 "name": "BaseBdev2", 00:11:01.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.963 "is_configured": false, 00:11:01.963 
"data_offset": 0, 00:11:01.963 "data_size": 0 00:11:01.963 } 00:11:01.963 ] 00:11:01.963 }' 00:11:01.963 06:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.963 06:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.530 06:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:02.530 06:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.530 06:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.530 [2024-11-26 06:20:46.376489] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:02.530 [2024-11-26 06:20:46.376643] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:02.530 06:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.530 06:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:02.530 06:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.530 06:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.530 [2024-11-26 06:20:46.384518] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:02.530 [2024-11-26 06:20:46.386970] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:02.530 [2024-11-26 06:20:46.387023] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:02.530 06:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.530 06:20:46 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:02.530 06:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:02.530 06:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:11:02.530 06:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:02.530 06:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:02.530 06:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:02.530 06:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:02.530 06:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:02.530 06:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.530 06:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.530 06:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.530 06:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.530 06:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.530 06:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:02.530 06:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.530 06:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.530 06:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.530 06:20:46 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.530 "name": "Existed_Raid", 00:11:02.530 "uuid": "7a044360-6730-4a39-80e4-5646e9e9e528", 00:11:02.530 "strip_size_kb": 0, 00:11:02.530 "state": "configuring", 00:11:02.530 "raid_level": "raid1", 00:11:02.530 "superblock": true, 00:11:02.530 "num_base_bdevs": 2, 00:11:02.530 "num_base_bdevs_discovered": 1, 00:11:02.530 "num_base_bdevs_operational": 2, 00:11:02.530 "base_bdevs_list": [ 00:11:02.530 { 00:11:02.530 "name": "BaseBdev1", 00:11:02.530 "uuid": "494445da-bbb3-48ae-8dbf-b5dc7799e35d", 00:11:02.530 "is_configured": true, 00:11:02.530 "data_offset": 2048, 00:11:02.530 "data_size": 63488 00:11:02.530 }, 00:11:02.530 { 00:11:02.530 "name": "BaseBdev2", 00:11:02.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.530 "is_configured": false, 00:11:02.530 "data_offset": 0, 00:11:02.530 "data_size": 0 00:11:02.530 } 00:11:02.530 ] 00:11:02.530 }' 00:11:02.530 06:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.530 06:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.854 06:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:02.854 06:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.854 06:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.854 [2024-11-26 06:20:46.881596] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:02.854 [2024-11-26 06:20:46.882034] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:02.854 [2024-11-26 06:20:46.882106] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:02.854 [2024-11-26 06:20:46.882459] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:02.854 
BaseBdev2 00:11:02.854 [2024-11-26 06:20:46.882680] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:02.854 [2024-11-26 06:20:46.882733] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:02.854 [2024-11-26 06:20:46.882953] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:02.854 06:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.854 06:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:02.854 06:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:02.854 06:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:02.854 06:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:02.854 06:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:02.854 06:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:02.855 06:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:02.855 06:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.855 06:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.855 06:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.855 06:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:02.855 06:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.855 06:20:46 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:02.855 [ 00:11:02.855 { 00:11:02.855 "name": "BaseBdev2", 00:11:02.855 "aliases": [ 00:11:02.855 "3f967148-1533-40c6-add9-fe7729e4571c" 00:11:02.855 ], 00:11:02.855 "product_name": "Malloc disk", 00:11:02.855 "block_size": 512, 00:11:02.855 "num_blocks": 65536, 00:11:02.855 "uuid": "3f967148-1533-40c6-add9-fe7729e4571c", 00:11:02.855 "assigned_rate_limits": { 00:11:02.855 "rw_ios_per_sec": 0, 00:11:02.855 "rw_mbytes_per_sec": 0, 00:11:02.855 "r_mbytes_per_sec": 0, 00:11:02.855 "w_mbytes_per_sec": 0 00:11:02.855 }, 00:11:02.855 "claimed": true, 00:11:02.855 "claim_type": "exclusive_write", 00:11:02.855 "zoned": false, 00:11:02.855 "supported_io_types": { 00:11:02.855 "read": true, 00:11:02.855 "write": true, 00:11:02.855 "unmap": true, 00:11:02.855 "flush": true, 00:11:02.855 "reset": true, 00:11:02.855 "nvme_admin": false, 00:11:02.855 "nvme_io": false, 00:11:02.855 "nvme_io_md": false, 00:11:02.855 "write_zeroes": true, 00:11:02.855 "zcopy": true, 00:11:02.855 "get_zone_info": false, 00:11:02.855 "zone_management": false, 00:11:02.855 "zone_append": false, 00:11:02.855 "compare": false, 00:11:02.855 "compare_and_write": false, 00:11:02.855 "abort": true, 00:11:02.855 "seek_hole": false, 00:11:02.855 "seek_data": false, 00:11:02.855 "copy": true, 00:11:02.855 "nvme_iov_md": false 00:11:02.855 }, 00:11:02.855 "memory_domains": [ 00:11:02.855 { 00:11:02.855 "dma_device_id": "system", 00:11:02.855 "dma_device_type": 1 00:11:02.855 }, 00:11:02.855 { 00:11:02.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.855 "dma_device_type": 2 00:11:02.855 } 00:11:02.855 ], 00:11:02.855 "driver_specific": {} 00:11:02.855 } 00:11:02.855 ] 00:11:02.855 06:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.855 06:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:02.855 06:20:46 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:02.855 06:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:02.855 06:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:11:02.855 06:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:02.855 06:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:02.855 06:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:02.855 06:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:02.855 06:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:02.855 06:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.855 06:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.855 06:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.855 06:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.855 06:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.855 06:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.855 06:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.855 06:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:02.855 06:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.855 06:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:11:02.855 "name": "Existed_Raid", 00:11:02.855 "uuid": "7a044360-6730-4a39-80e4-5646e9e9e528", 00:11:02.855 "strip_size_kb": 0, 00:11:02.855 "state": "online", 00:11:02.855 "raid_level": "raid1", 00:11:02.855 "superblock": true, 00:11:02.855 "num_base_bdevs": 2, 00:11:02.855 "num_base_bdevs_discovered": 2, 00:11:02.855 "num_base_bdevs_operational": 2, 00:11:02.855 "base_bdevs_list": [ 00:11:02.855 { 00:11:02.855 "name": "BaseBdev1", 00:11:02.855 "uuid": "494445da-bbb3-48ae-8dbf-b5dc7799e35d", 00:11:02.855 "is_configured": true, 00:11:02.855 "data_offset": 2048, 00:11:02.855 "data_size": 63488 00:11:02.855 }, 00:11:02.855 { 00:11:02.855 "name": "BaseBdev2", 00:11:02.855 "uuid": "3f967148-1533-40c6-add9-fe7729e4571c", 00:11:02.855 "is_configured": true, 00:11:02.855 "data_offset": 2048, 00:11:02.855 "data_size": 63488 00:11:02.855 } 00:11:02.855 ] 00:11:02.855 }' 00:11:02.855 06:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.855 06:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.423 06:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:03.423 06:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:03.423 06:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:03.423 06:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:03.423 06:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:03.423 06:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:03.423 06:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:03.423 06:20:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.423 06:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.423 06:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:03.423 [2024-11-26 06:20:47.401234] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:03.423 06:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.423 06:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:03.423 "name": "Existed_Raid", 00:11:03.423 "aliases": [ 00:11:03.423 "7a044360-6730-4a39-80e4-5646e9e9e528" 00:11:03.423 ], 00:11:03.423 "product_name": "Raid Volume", 00:11:03.423 "block_size": 512, 00:11:03.423 "num_blocks": 63488, 00:11:03.423 "uuid": "7a044360-6730-4a39-80e4-5646e9e9e528", 00:11:03.423 "assigned_rate_limits": { 00:11:03.423 "rw_ios_per_sec": 0, 00:11:03.423 "rw_mbytes_per_sec": 0, 00:11:03.423 "r_mbytes_per_sec": 0, 00:11:03.423 "w_mbytes_per_sec": 0 00:11:03.423 }, 00:11:03.423 "claimed": false, 00:11:03.423 "zoned": false, 00:11:03.423 "supported_io_types": { 00:11:03.423 "read": true, 00:11:03.423 "write": true, 00:11:03.423 "unmap": false, 00:11:03.423 "flush": false, 00:11:03.423 "reset": true, 00:11:03.423 "nvme_admin": false, 00:11:03.423 "nvme_io": false, 00:11:03.423 "nvme_io_md": false, 00:11:03.423 "write_zeroes": true, 00:11:03.423 "zcopy": false, 00:11:03.423 "get_zone_info": false, 00:11:03.423 "zone_management": false, 00:11:03.423 "zone_append": false, 00:11:03.423 "compare": false, 00:11:03.423 "compare_and_write": false, 00:11:03.423 "abort": false, 00:11:03.423 "seek_hole": false, 00:11:03.423 "seek_data": false, 00:11:03.423 "copy": false, 00:11:03.423 "nvme_iov_md": false 00:11:03.423 }, 00:11:03.423 "memory_domains": [ 00:11:03.423 { 00:11:03.423 "dma_device_id": "system", 00:11:03.423 
"dma_device_type": 1 00:11:03.423 }, 00:11:03.423 { 00:11:03.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.423 "dma_device_type": 2 00:11:03.423 }, 00:11:03.423 { 00:11:03.423 "dma_device_id": "system", 00:11:03.423 "dma_device_type": 1 00:11:03.423 }, 00:11:03.423 { 00:11:03.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.423 "dma_device_type": 2 00:11:03.423 } 00:11:03.423 ], 00:11:03.423 "driver_specific": { 00:11:03.423 "raid": { 00:11:03.423 "uuid": "7a044360-6730-4a39-80e4-5646e9e9e528", 00:11:03.423 "strip_size_kb": 0, 00:11:03.423 "state": "online", 00:11:03.423 "raid_level": "raid1", 00:11:03.423 "superblock": true, 00:11:03.423 "num_base_bdevs": 2, 00:11:03.423 "num_base_bdevs_discovered": 2, 00:11:03.423 "num_base_bdevs_operational": 2, 00:11:03.423 "base_bdevs_list": [ 00:11:03.423 { 00:11:03.423 "name": "BaseBdev1", 00:11:03.423 "uuid": "494445da-bbb3-48ae-8dbf-b5dc7799e35d", 00:11:03.424 "is_configured": true, 00:11:03.424 "data_offset": 2048, 00:11:03.424 "data_size": 63488 00:11:03.424 }, 00:11:03.424 { 00:11:03.424 "name": "BaseBdev2", 00:11:03.424 "uuid": "3f967148-1533-40c6-add9-fe7729e4571c", 00:11:03.424 "is_configured": true, 00:11:03.424 "data_offset": 2048, 00:11:03.424 "data_size": 63488 00:11:03.424 } 00:11:03.424 ] 00:11:03.424 } 00:11:03.424 } 00:11:03.424 }' 00:11:03.424 06:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:03.424 06:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:03.424 BaseBdev2' 00:11:03.424 06:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.424 06:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:03.424 06:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 
-- # for name in $base_bdev_names 00:11:03.424 06:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.424 06:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:03.424 06:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.424 06:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.681 06:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.681 06:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:03.681 06:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:03.681 06:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:03.681 06:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:03.681 06:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.681 06:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.681 06:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.681 06:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.681 06:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:03.681 06:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:03.681 06:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:03.681 06:20:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.681 06:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.681 [2024-11-26 06:20:47.620564] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:03.681 06:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.681 06:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:03.682 06:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:03.682 06:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:03.682 06:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:11:03.682 06:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:03.682 06:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:11:03.682 06:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:03.682 06:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:03.682 06:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:03.682 06:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:03.682 06:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:03.682 06:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.682 06:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.682 06:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:03.682 06:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.682 06:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:03.682 06:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.682 06:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.682 06:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.682 06:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.682 06:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.682 "name": "Existed_Raid", 00:11:03.682 "uuid": "7a044360-6730-4a39-80e4-5646e9e9e528", 00:11:03.682 "strip_size_kb": 0, 00:11:03.682 "state": "online", 00:11:03.682 "raid_level": "raid1", 00:11:03.682 "superblock": true, 00:11:03.682 "num_base_bdevs": 2, 00:11:03.682 "num_base_bdevs_discovered": 1, 00:11:03.682 "num_base_bdevs_operational": 1, 00:11:03.682 "base_bdevs_list": [ 00:11:03.682 { 00:11:03.682 "name": null, 00:11:03.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.682 "is_configured": false, 00:11:03.682 "data_offset": 0, 00:11:03.682 "data_size": 63488 00:11:03.682 }, 00:11:03.682 { 00:11:03.682 "name": "BaseBdev2", 00:11:03.682 "uuid": "3f967148-1533-40c6-add9-fe7729e4571c", 00:11:03.682 "is_configured": true, 00:11:03.682 "data_offset": 2048, 00:11:03.682 "data_size": 63488 00:11:03.682 } 00:11:03.682 ] 00:11:03.682 }' 00:11:03.682 06:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.682 06:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.248 06:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 
00:11:04.248 06:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:04.248 06:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.248 06:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.248 06:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.248 06:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:04.248 06:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.248 06:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:04.248 06:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:04.248 06:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:04.248 06:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.248 06:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.248 [2024-11-26 06:20:48.275871] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:04.248 [2024-11-26 06:20:48.276121] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:04.507 [2024-11-26 06:20:48.394392] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:04.507 [2024-11-26 06:20:48.394481] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:04.507 [2024-11-26 06:20:48.394498] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:04.507 06:20:48 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.507 06:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:04.507 06:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:04.507 06:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.507 06:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.507 06:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.507 06:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:04.507 06:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.507 06:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:04.507 06:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:04.507 06:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:11:04.507 06:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 63329 00:11:04.507 06:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 63329 ']' 00:11:04.507 06:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 63329 00:11:04.507 06:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:04.507 06:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:04.507 06:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63329 00:11:04.507 06:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:04.507 06:20:48 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:04.507 killing process with pid 63329 00:11:04.507 06:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63329' 00:11:04.507 06:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 63329 00:11:04.507 [2024-11-26 06:20:48.490031] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:04.507 06:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 63329 00:11:04.507 [2024-11-26 06:20:48.512636] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:05.881 06:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:05.881 00:11:05.881 real 0m5.591s 00:11:05.881 user 0m7.871s 00:11:05.881 sys 0m1.041s 00:11:05.881 06:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:05.881 ************************************ 00:11:05.881 END TEST raid_state_function_test_sb 00:11:05.881 06:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.881 ************************************ 00:11:05.881 06:20:49 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:11:05.881 06:20:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:05.881 06:20:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:05.881 06:20:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:05.881 ************************************ 00:11:05.881 START TEST raid_superblock_test 00:11:05.881 ************************************ 00:11:05.881 06:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:11:05.881 06:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local 
raid_level=raid1 00:11:05.881 06:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:11:05.881 06:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:05.881 06:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:05.881 06:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:05.881 06:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:05.881 06:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:05.881 06:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:05.881 06:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:05.881 06:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:05.881 06:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:05.881 06:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:05.881 06:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:05.881 06:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:11:05.881 06:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:11:05.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:05.881 06:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63581 00:11:05.881 06:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63581 00:11:05.881 06:20:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:05.881 06:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 63581 ']' 00:11:05.881 06:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:05.881 06:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:05.881 06:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:05.881 06:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:05.881 06:20:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.881 [2024-11-26 06:20:49.996859] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:11:05.881 [2024-11-26 06:20:49.997016] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63581 ]
00:11:06.140 [2024-11-26 06:20:50.180289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:06.398 [2024-11-26 06:20:50.332798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:06.656 [2024-11-26 06:20:50.597324] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:11:06.656 [2024-11-26 06:20:50.597438] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:11:06.915 06:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:06.915 06:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:11:06.915 06:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:11:06.915 06:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:11:06.915 06:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:11:06.915 06:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:11:06.915 06:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:11:06.915 06:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:11:06.915 06:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:11:06.915 06:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:11:06.915 06:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:11:06.915 06:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:06.915 06:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:06.915 malloc1
00:11:06.915 06:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:06.915 06:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:11:06.915 06:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:06.915 06:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:06.915 [2024-11-26 06:20:50.950634] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:11:06.915 [2024-11-26 06:20:50.950794] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:06.915 [2024-11-26 06:20:50.950872] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:11:06.915 [2024-11-26 06:20:50.950933] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:06.915 [2024-11-26 06:20:50.954168] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:06.915 [2024-11-26 06:20:50.954292] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:11:06.915 pt1
00:11:06.915 06:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:06.915 06:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:11:06.915 06:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:11:06.915 06:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:11:06.915 06:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:11:06.915 06:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:11:06.915 06:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:11:06.915 06:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:11:06.915 06:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:11:06.915 06:20:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:11:06.915 06:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:06.915 06:20:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:06.915 malloc2
00:11:06.915 06:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:06.915 06:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:11:06.915 06:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:06.915 06:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:06.915 [2024-11-26 06:20:51.022004] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:11:06.915 [2024-11-26 06:20:51.022187] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:06.915 [2024-11-26 06:20:51.022240] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:11:06.915 [2024-11-26 06:20:51.022277] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:06.915 [2024-11-26 06:20:51.024994] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:06.915 [2024-11-26 06:20:51.025107] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:11:06.915 pt2
00:11:06.915 06:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:06.915 06:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:11:06.915 06:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:11:06.915 06:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s
00:11:06.915 06:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:06.915 06:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:06.915 [2024-11-26 06:20:51.034127] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:11:06.915 [2024-11-26 06:20:51.036597] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:11:06.915 [2024-11-26 06:20:51.036891] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:11:06.915 [2024-11-26 06:20:51.036954] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:11:06.915 [2024-11-26 06:20:51.037392] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:11:06.915 [2024-11-26 06:20:51.037679] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:11:06.916 [2024-11-26 06:20:51.037747] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:11:06.916 [2024-11-26 06:20:51.038117] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:06.916 06:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:06.916 06:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:11:06.916 06:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:06.916 06:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:06.916 06:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:06.916 06:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:06.916 06:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:11:06.916 06:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:06.916 06:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:06.916 06:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:06.916 06:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:07.175 06:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:07.175 06:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:07.175 06:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:07.175 06:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:07.175 06:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:07.175 06:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:07.175 "name": "raid_bdev1",
00:11:07.175 "uuid": "17863236-da05-4d0a-8b0a-105f7e545f3a",
00:11:07.175 "strip_size_kb": 0,
00:11:07.175 "state": "online",
00:11:07.175 "raid_level": "raid1",
00:11:07.175 "superblock": true,
00:11:07.175 "num_base_bdevs": 2,
00:11:07.175 "num_base_bdevs_discovered": 2,
00:11:07.176 "num_base_bdevs_operational": 2,
00:11:07.176 "base_bdevs_list": [
00:11:07.176 {
00:11:07.176 "name": "pt1",
00:11:07.176 "uuid": "00000000-0000-0000-0000-000000000001",
00:11:07.176 "is_configured": true,
00:11:07.176 "data_offset": 2048,
00:11:07.176 "data_size": 63488
00:11:07.176 },
00:11:07.176 {
00:11:07.176 "name": "pt2",
00:11:07.176 "uuid": "00000000-0000-0000-0000-000000000002",
00:11:07.176 "is_configured": true,
00:11:07.176 "data_offset": 2048,
00:11:07.176 "data_size": 63488
00:11:07.176 }
00:11:07.176 ]
00:11:07.176 }'
00:11:07.176 06:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:07.176 06:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:07.436 06:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:11:07.436 06:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:11:07.436 06:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:11:07.436 06:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:11:07.436 06:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:11:07.436 06:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:11:07.436 06:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:11:07.436 06:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:07.436 06:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:07.436 06:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:11:07.436 [2024-11-26 06:20:51.541694] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:11:07.696 06:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:07.696 06:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:11:07.696 "name": "raid_bdev1",
00:11:07.696 "aliases": [
00:11:07.696 "17863236-da05-4d0a-8b0a-105f7e545f3a"
00:11:07.696 ],
00:11:07.696 "product_name": "Raid Volume",
00:11:07.696 "block_size": 512,
00:11:07.696 "num_blocks": 63488,
00:11:07.696 "uuid": "17863236-da05-4d0a-8b0a-105f7e545f3a",
00:11:07.696 "assigned_rate_limits": {
00:11:07.696 "rw_ios_per_sec": 0,
00:11:07.696 "rw_mbytes_per_sec": 0,
00:11:07.696 "r_mbytes_per_sec": 0,
00:11:07.696 "w_mbytes_per_sec": 0
00:11:07.696 },
00:11:07.696 "claimed": false,
00:11:07.696 "zoned": false,
00:11:07.696 "supported_io_types": {
00:11:07.696 "read": true,
00:11:07.696 "write": true,
00:11:07.696 "unmap": false,
00:11:07.696 "flush": false,
00:11:07.696 "reset": true,
00:11:07.696 "nvme_admin": false,
00:11:07.696 "nvme_io": false,
00:11:07.696 "nvme_io_md": false,
00:11:07.696 "write_zeroes": true,
00:11:07.696 "zcopy": false,
00:11:07.696 "get_zone_info": false,
00:11:07.696 "zone_management": false,
00:11:07.696 "zone_append": false,
00:11:07.696 "compare": false,
00:11:07.696 "compare_and_write": false,
00:11:07.696 "abort": false,
00:11:07.696 "seek_hole": false,
00:11:07.696 "seek_data": false,
00:11:07.696 "copy": false,
00:11:07.696 "nvme_iov_md": false
00:11:07.696 },
00:11:07.696 "memory_domains": [
00:11:07.696 {
00:11:07.696 "dma_device_id": "system",
00:11:07.696 "dma_device_type": 1
00:11:07.696 },
00:11:07.696 {
00:11:07.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:07.696 "dma_device_type": 2
00:11:07.696 },
00:11:07.696 {
00:11:07.696 "dma_device_id": "system",
00:11:07.696 "dma_device_type": 1
00:11:07.696 },
00:11:07.696 {
00:11:07.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:07.696 "dma_device_type": 2
00:11:07.696 }
00:11:07.696 ],
00:11:07.696 "driver_specific": {
00:11:07.696 "raid": {
00:11:07.696 "uuid": "17863236-da05-4d0a-8b0a-105f7e545f3a",
00:11:07.696 "strip_size_kb": 0,
00:11:07.696 "state": "online",
00:11:07.696 "raid_level": "raid1",
00:11:07.696 "superblock": true,
00:11:07.696 "num_base_bdevs": 2,
00:11:07.696 "num_base_bdevs_discovered": 2,
00:11:07.696 "num_base_bdevs_operational": 2,
00:11:07.696 "base_bdevs_list": [
00:11:07.696 {
00:11:07.696 "name": "pt1",
00:11:07.696 "uuid": "00000000-0000-0000-0000-000000000001",
00:11:07.696 "is_configured": true,
00:11:07.696 "data_offset": 2048,
00:11:07.696 "data_size": 63488
00:11:07.696 },
00:11:07.696 {
00:11:07.696 "name": "pt2",
00:11:07.696 "uuid": "00000000-0000-0000-0000-000000000002",
00:11:07.696 "is_configured": true,
00:11:07.696 "data_offset": 2048,
00:11:07.696 "data_size": 63488
00:11:07.696 }
00:11:07.696 ]
00:11:07.696 }
00:11:07.696 }
00:11:07.696 }'
00:11:07.696 06:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:11:07.696 06:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:11:07.696 pt2'
00:11:07.696 06:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:07.696 06:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:11:07.696 06:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:07.696 06:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:07.696 06:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:11:07.696 06:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:07.696 06:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:07.696 06:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:07.696 06:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:07.696 06:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:07.696 06:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:07.696 06:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:11:07.697 06:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:07.697 06:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:07.697 06:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:07.697 06:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:07.697 06:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:07.697 06:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:07.697 06:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:11:07.697 06:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:07.697 06:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:11:07.697 06:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:07.697 [2024-11-26 06:20:51.785338] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:11:07.697 06:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:07.697 06:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=17863236-da05-4d0a-8b0a-105f7e545f3a
00:11:07.697 06:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 17863236-da05-4d0a-8b0a-105f7e545f3a ']'
00:11:07.697 06:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:11:07.697 06:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:07.697 06:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:07.957 [2024-11-26 06:20:51.828881] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:11:07.957 [2024-11-26 06:20:51.828937] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:11:07.957 [2024-11-26 06:20:51.829060] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:11:07.957 [2024-11-26 06:20:51.829162] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:11:07.957 [2024-11-26 06:20:51.829182] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:11:07.957 06:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:07.957 06:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:07.957 06:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:11:07.957 06:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:07.957 06:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:07.957 06:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:07.957 06:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:11:07.957 06:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:11:07.957 06:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:11:07.957 06:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:11:07.957 06:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:07.957 06:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:07.957 06:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:07.957 06:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:11:07.957 06:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:11:07.957 06:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:07.957 06:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:07.957 06:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:07.957 06:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:11:07.957 06:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:07.957 06:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:11:07.957 06:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:07.957 06:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:07.957 06:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:11:07.957 06:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:11:07.957 06:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0
00:11:07.957 06:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:11:07.957 06:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:11:07.957 06:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:11:07.957 06:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:11:07.957 06:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:11:07.957 06:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:11:07.957 06:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:07.957 06:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:07.957 [2024-11-26 06:20:51.956747] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:11:07.957 [2024-11-26 06:20:51.959351] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:11:07.957 [2024-11-26 06:20:51.959446] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:11:07.957 [2024-11-26 06:20:51.959520] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:11:07.957 [2024-11-26 06:20:51.959539] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:11:07.957 [2024-11-26 06:20:51.959551] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:11:07.957 request:
00:11:07.957 {
00:11:07.957 "name": "raid_bdev1",
00:11:07.957 "raid_level": "raid1",
00:11:07.957 "base_bdevs": [
00:11:07.957 "malloc1",
00:11:07.957 "malloc2"
00:11:07.957 ],
00:11:07.957 "superblock": false,
00:11:07.957 "method": "bdev_raid_create",
00:11:07.957 "req_id": 1
00:11:07.957 }
00:11:07.957 Got JSON-RPC error response
00:11:07.957 response:
00:11:07.957 {
00:11:07.958 "code": -17,
00:11:07.958 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:11:07.958 }
00:11:07.958 06:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:11:07.958 06:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1
00:11:07.958 06:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:11:07.958 06:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:11:07.958 06:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:11:07.958 06:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:11:07.958 06:20:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:07.958 06:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:07.958 06:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:07.958 06:20:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:07.958 06:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:11:07.958 06:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:11:07.958 06:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:11:07.958 06:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:07.958 06:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:07.958 [2024-11-26 06:20:52.024580] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:11:07.958 [2024-11-26 06:20:52.024672] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:07.958 [2024-11-26 06:20:52.024698] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:11:07.958 [2024-11-26 06:20:52.024712] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:07.958 [2024-11-26 06:20:52.027785] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:07.958 [2024-11-26 06:20:52.027924] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:11:07.958 [2024-11-26 06:20:52.028170] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:11:07.958 [2024-11-26 06:20:52.028297] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:11:07.958 pt1
00:11:07.958 06:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:07.958 06:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:11:07.958 06:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:07.958 06:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:07.958 06:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:07.958 06:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:07.958 06:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:11:07.958 06:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:07.958 06:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:07.958 06:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:07.958 06:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:07.958 06:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:07.958 06:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:07.958 06:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:07.958 06:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:07.958 06:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:07.958 06:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:07.958 "name": "raid_bdev1",
00:11:07.958 "uuid": "17863236-da05-4d0a-8b0a-105f7e545f3a",
00:11:07.958 "strip_size_kb": 0,
00:11:07.958 "state": "configuring",
00:11:07.958 "raid_level": "raid1",
00:11:07.958 "superblock": true,
00:11:07.958 "num_base_bdevs": 2,
00:11:07.958 "num_base_bdevs_discovered": 1,
00:11:07.958 "num_base_bdevs_operational": 2,
00:11:07.958 "base_bdevs_list": [
00:11:07.958 {
00:11:07.958 "name": "pt1",
00:11:07.958 "uuid": "00000000-0000-0000-0000-000000000001",
00:11:07.958 "is_configured": true,
00:11:07.958 "data_offset": 2048,
00:11:07.958 "data_size": 63488
00:11:07.958 },
00:11:07.958 {
00:11:07.958 "name": null,
00:11:07.958 "uuid": "00000000-0000-0000-0000-000000000002",
00:11:07.958 "is_configured": false,
00:11:07.958 "data_offset": 2048,
00:11:07.958 "data_size": 63488
00:11:07.958 }
00:11:07.958 ]
00:11:07.958 }'
00:11:07.958 06:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:07.958 06:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:08.528 06:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']'
00:11:08.528 06:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:11:08.528 06:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:11:08.528 06:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:11:08.528 06:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:08.528 06:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:08.528 [2024-11-26 06:20:52.527862] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:11:08.528 [2024-11-26 06:20:52.528083] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:08.528 [2024-11-26 06:20:52.528157] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:11:08.528 [2024-11-26 06:20:52.528208] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:08.528 [2024-11-26 06:20:52.528891] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:08.528 [2024-11-26 06:20:52.528983] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:11:08.528 [2024-11-26 06:20:52.529163] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:11:08.528 [2024-11-26 06:20:52.529242] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:11:08.528 [2024-11-26 06:20:52.529441] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:11:08.528 [2024-11-26 06:20:52.529504] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:11:08.528 [2024-11-26 06:20:52.529861] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:11:08.528 [2024-11-26 06:20:52.530134] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:11:08.528 [2024-11-26 06:20:52.530184] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:11:08.528 [2024-11-26 06:20:52.530451] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:08.528 pt2
00:11:08.528 06:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:08.528 06:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:11:08.528 06:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:11:08.528 06:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:11:08.528 06:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:08.528 06:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:08.528 06:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:08.528 06:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:08.528 06:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:11:08.528 06:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:08.528 06:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:08.528 06:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:08.528 06:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:08.528 06:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:08.528 06:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:08.528 06:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:08.528 06:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:08.528 06:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:08.528 06:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:08.528 "name": "raid_bdev1",
00:11:08.528 "uuid": "17863236-da05-4d0a-8b0a-105f7e545f3a",
00:11:08.528 "strip_size_kb": 0,
00:11:08.528 "state": "online",
00:11:08.528 "raid_level": "raid1",
00:11:08.528 "superblock": true,
00:11:08.528 "num_base_bdevs": 2,
00:11:08.528 "num_base_bdevs_discovered": 2,
00:11:08.528 "num_base_bdevs_operational": 2,
00:11:08.528 "base_bdevs_list": [
00:11:08.528 {
00:11:08.528 "name": "pt1",
00:11:08.528 "uuid": "00000000-0000-0000-0000-000000000001",
00:11:08.528 "is_configured": true,
00:11:08.528 "data_offset": 2048,
00:11:08.528 "data_size": 63488
00:11:08.529 },
00:11:08.529 {
00:11:08.529 "name": "pt2",
00:11:08.529 "uuid": "00000000-0000-0000-0000-000000000002",
00:11:08.529 "is_configured": true,
00:11:08.529 "data_offset": 2048,
00:11:08.529 "data_size": 63488
00:11:08.529 }
00:11:08.529 ]
00:11:08.529 }'
00:11:08.529 06:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:08.529 06:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:09.098 06:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:11:09.098 06:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:11:09.098 06:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:11:09.098 06:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:11:09.098 06:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:11:09.098 06:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:11:09.098 06:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:11:09.098 06:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:11:09.098 06:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:09.098 06:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:09.098 [2024-11-26 06:20:53.039304] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:11:09.098 06:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:09.098 06:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:11:09.098 "name": "raid_bdev1",
00:11:09.098 "aliases": [
00:11:09.098 "17863236-da05-4d0a-8b0a-105f7e545f3a"
00:11:09.098 ],
00:11:09.098 "product_name": "Raid Volume",
00:11:09.098 "block_size": 512,
00:11:09.098 "num_blocks": 63488,
00:11:09.098 "uuid": "17863236-da05-4d0a-8b0a-105f7e545f3a",
00:11:09.098 "assigned_rate_limits": {
00:11:09.098 "rw_ios_per_sec": 0,
00:11:09.098 "rw_mbytes_per_sec": 0,
00:11:09.098 "r_mbytes_per_sec": 0,
00:11:09.098 "w_mbytes_per_sec": 0
00:11:09.098 },
00:11:09.098 "claimed": false,
00:11:09.098 "zoned": false,
00:11:09.098 "supported_io_types": {
00:11:09.098 "read": true,
00:11:09.098 "write": true,
00:11:09.098 "unmap": false,
00:11:09.098 "flush": false,
00:11:09.098 "reset": true,
00:11:09.098 "nvme_admin": false,
00:11:09.098 "nvme_io": false,
00:11:09.098 "nvme_io_md": false,
00:11:09.098 "write_zeroes": true,
00:11:09.098 "zcopy": false,
00:11:09.098 "get_zone_info": false,
00:11:09.098 "zone_management": false,
00:11:09.098 "zone_append": false,
00:11:09.098 "compare": false,
00:11:09.098 "compare_and_write": false,
00:11:09.098 "abort": false,
00:11:09.098 "seek_hole": false,
00:11:09.098 "seek_data": false,
00:11:09.098 "copy": false,
00:11:09.098 "nvme_iov_md": false
00:11:09.098 },
00:11:09.098 "memory_domains": [
00:11:09.098 {
00:11:09.098 "dma_device_id": "system",
00:11:09.098 "dma_device_type": 1
00:11:09.098 },
00:11:09.098 {
00:11:09.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:09.098 "dma_device_type": 2
00:11:09.098 },
00:11:09.098 {
00:11:09.098 "dma_device_id": "system",
00:11:09.098 "dma_device_type": 1
00:11:09.098 },
00:11:09.098 {
00:11:09.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:09.098 "dma_device_type": 2
00:11:09.098 }
00:11:09.098 ],
00:11:09.098 "driver_specific": {
00:11:09.098 "raid": {
00:11:09.098 "uuid": "17863236-da05-4d0a-8b0a-105f7e545f3a",
00:11:09.098 "strip_size_kb": 0,
00:11:09.098 "state": "online",
00:11:09.098 "raid_level": "raid1",
00:11:09.098 "superblock": true,
00:11:09.098 "num_base_bdevs": 2,
00:11:09.098 "num_base_bdevs_discovered": 2,
00:11:09.098 "num_base_bdevs_operational": 2,
00:11:09.098 "base_bdevs_list": [
00:11:09.098 {
00:11:09.098 "name": "pt1",
00:11:09.098 "uuid": "00000000-0000-0000-0000-000000000001",
00:11:09.098 "is_configured": true,
00:11:09.098 "data_offset": 2048,
00:11:09.098 "data_size": 63488
00:11:09.098 },
00:11:09.098 {
00:11:09.098 "name": "pt2",
00:11:09.098 "uuid": "00000000-0000-0000-0000-000000000002",
00:11:09.098 "is_configured": true,
00:11:09.098 "data_offset": 2048,
00:11:09.098 "data_size": 63488
00:11:09.098 }
00:11:09.098 ]
00:11:09.098 }
00:11:09.098 }
00:11:09.098 }'
00:11:09.099 06:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:11:09.099 06:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:11:09.099 pt2'
00:11:09.099 06:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:09.099 06:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:11:09.099 06:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:09.099 06:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:11:09.099 06:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:09.099 06:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:09.099 06:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:09.099 06:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:09.357 06:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:09.357 06:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:09.357 06:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:09.357 06:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:09.357 06:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:11:09.357 06:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:09.357 06:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:09.357 06:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:09.357 06:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:09.357 06:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:09.357 06:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:11:09.357 06:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:11:09.357 06:20:53 bdev_raid.raid_superblock_test --
common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.357 06:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.357 [2024-11-26 06:20:53.294840] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:09.357 06:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.357 06:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 17863236-da05-4d0a-8b0a-105f7e545f3a '!=' 17863236-da05-4d0a-8b0a-105f7e545f3a ']' 00:11:09.357 06:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:11:09.358 06:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:09.358 06:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:09.358 06:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:11:09.358 06:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.358 06:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.358 [2024-11-26 06:20:53.338511] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:11:09.358 06:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.358 06:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:09.358 06:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:09.358 06:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:09.358 06:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:09.358 06:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:09.358 06:20:53 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:09.358 06:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.358 06:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.358 06:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.358 06:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.358 06:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.358 06:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:09.358 06:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.358 06:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.358 06:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.358 06:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.358 "name": "raid_bdev1", 00:11:09.358 "uuid": "17863236-da05-4d0a-8b0a-105f7e545f3a", 00:11:09.358 "strip_size_kb": 0, 00:11:09.358 "state": "online", 00:11:09.358 "raid_level": "raid1", 00:11:09.358 "superblock": true, 00:11:09.358 "num_base_bdevs": 2, 00:11:09.358 "num_base_bdevs_discovered": 1, 00:11:09.358 "num_base_bdevs_operational": 1, 00:11:09.358 "base_bdevs_list": [ 00:11:09.358 { 00:11:09.358 "name": null, 00:11:09.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.358 "is_configured": false, 00:11:09.358 "data_offset": 0, 00:11:09.358 "data_size": 63488 00:11:09.358 }, 00:11:09.358 { 00:11:09.358 "name": "pt2", 00:11:09.358 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:09.358 "is_configured": true, 00:11:09.358 "data_offset": 2048, 00:11:09.358 "data_size": 63488 00:11:09.358 } 00:11:09.358 ] 00:11:09.358 }' 
00:11:09.358 06:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.358 06:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.929 06:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:09.929 06:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.929 06:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.929 [2024-11-26 06:20:53.781837] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:09.929 [2024-11-26 06:20:53.781954] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:09.929 [2024-11-26 06:20:53.782150] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:09.929 [2024-11-26 06:20:53.782263] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:09.929 [2024-11-26 06:20:53.782320] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:09.929 06:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.929 06:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.929 06:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.929 06:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.929 06:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:11:09.929 06:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.929 06:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:11:09.929 06:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' 
']' 00:11:09.929 06:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:11:09.929 06:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:09.929 06:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:11:09.929 06:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.929 06:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.929 06:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.929 06:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:09.929 06:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:09.929 06:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:11:09.929 06:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:09.929 06:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:11:09.929 06:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:09.929 06:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.929 06:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.929 [2024-11-26 06:20:53.857663] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:09.929 [2024-11-26 06:20:53.857754] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:09.929 [2024-11-26 06:20:53.857779] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:09.929 [2024-11-26 06:20:53.857791] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:09.929 
[2024-11-26 06:20:53.860699] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:09.929 [2024-11-26 06:20:53.860807] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:09.929 [2024-11-26 06:20:53.860954] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:09.929 [2024-11-26 06:20:53.861019] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:09.929 [2024-11-26 06:20:53.861189] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:09.929 [2024-11-26 06:20:53.861206] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:09.929 [2024-11-26 06:20:53.861479] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:09.929 [2024-11-26 06:20:53.861668] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:09.929 [2024-11-26 06:20:53.861679] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:09.929 [2024-11-26 06:20:53.861890] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:09.929 pt2 00:11:09.929 06:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.929 06:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:09.929 06:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:09.929 06:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:09.929 06:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:09.929 06:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:09.929 06:20:53 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:09.929 06:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.929 06:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.929 06:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.929 06:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.929 06:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.929 06:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:09.929 06:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.929 06:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.929 06:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.929 06:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.929 "name": "raid_bdev1", 00:11:09.929 "uuid": "17863236-da05-4d0a-8b0a-105f7e545f3a", 00:11:09.929 "strip_size_kb": 0, 00:11:09.929 "state": "online", 00:11:09.929 "raid_level": "raid1", 00:11:09.929 "superblock": true, 00:11:09.929 "num_base_bdevs": 2, 00:11:09.929 "num_base_bdevs_discovered": 1, 00:11:09.929 "num_base_bdevs_operational": 1, 00:11:09.929 "base_bdevs_list": [ 00:11:09.929 { 00:11:09.929 "name": null, 00:11:09.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.929 "is_configured": false, 00:11:09.929 "data_offset": 2048, 00:11:09.929 "data_size": 63488 00:11:09.929 }, 00:11:09.929 { 00:11:09.929 "name": "pt2", 00:11:09.929 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:09.929 "is_configured": true, 00:11:09.929 "data_offset": 2048, 00:11:09.929 "data_size": 63488 00:11:09.929 } 00:11:09.929 ] 00:11:09.929 }' 
00:11:09.929 06:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.929 06:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.193 06:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:10.193 06:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.193 06:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.193 [2024-11-26 06:20:54.301111] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:10.193 [2024-11-26 06:20:54.301217] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:10.193 [2024-11-26 06:20:54.301341] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:10.193 [2024-11-26 06:20:54.301473] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:10.193 [2024-11-26 06:20:54.301527] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:10.193 06:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.193 06:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.193 06:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.193 06:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:11:10.193 06:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.193 06:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.453 06:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:11:10.453 06:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' 
']' 00:11:10.453 06:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:11:10.453 06:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:10.453 06:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.453 06:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.453 [2024-11-26 06:20:54.365028] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:10.453 [2024-11-26 06:20:54.365198] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:10.453 [2024-11-26 06:20:54.365244] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:11:10.453 [2024-11-26 06:20:54.365281] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:10.453 [2024-11-26 06:20:54.368216] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:10.453 [2024-11-26 06:20:54.368296] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:10.453 [2024-11-26 06:20:54.368446] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:10.453 [2024-11-26 06:20:54.368547] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:10.453 [2024-11-26 06:20:54.368769] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:11:10.453 [2024-11-26 06:20:54.368829] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:10.453 [2024-11-26 06:20:54.368927] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:11:10.453 [2024-11-26 06:20:54.369082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
pt2 is claimed 00:11:10.453 [2024-11-26 06:20:54.369235] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:11:10.453 [2024-11-26 06:20:54.369277] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:10.453 [2024-11-26 06:20:54.369604] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:10.453 [2024-11-26 06:20:54.369833] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:11:10.453 [2024-11-26 06:20:54.369886] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:11:10.453 [2024-11-26 06:20:54.370185] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:10.453 pt1 00:11:10.453 06:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.453 06:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:11:10.453 06:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:10.453 06:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:10.453 06:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:10.453 06:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:10.453 06:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:10.453 06:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:10.453 06:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.453 06:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.453 06:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:10.453 06:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.453 06:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.453 06:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.453 06:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.453 06:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:10.453 06:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.454 06:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.454 "name": "raid_bdev1", 00:11:10.454 "uuid": "17863236-da05-4d0a-8b0a-105f7e545f3a", 00:11:10.454 "strip_size_kb": 0, 00:11:10.454 "state": "online", 00:11:10.454 "raid_level": "raid1", 00:11:10.454 "superblock": true, 00:11:10.454 "num_base_bdevs": 2, 00:11:10.454 "num_base_bdevs_discovered": 1, 00:11:10.454 "num_base_bdevs_operational": 1, 00:11:10.454 "base_bdevs_list": [ 00:11:10.454 { 00:11:10.454 "name": null, 00:11:10.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.454 "is_configured": false, 00:11:10.454 "data_offset": 2048, 00:11:10.454 "data_size": 63488 00:11:10.454 }, 00:11:10.454 { 00:11:10.454 "name": "pt2", 00:11:10.454 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:10.454 "is_configured": true, 00:11:10.454 "data_offset": 2048, 00:11:10.454 "data_size": 63488 00:11:10.454 } 00:11:10.454 ] 00:11:10.454 }' 00:11:10.454 06:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.454 06:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.022 06:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:11:11.022 06:20:54 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.022 06:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.022 06:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:11.022 06:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.022 06:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:11:11.022 06:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:11.022 06:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.022 06:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:11:11.022 06:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.022 [2024-11-26 06:20:54.952698] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:11.022 06:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.022 06:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 17863236-da05-4d0a-8b0a-105f7e545f3a '!=' 17863236-da05-4d0a-8b0a-105f7e545f3a ']' 00:11:11.022 06:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63581 00:11:11.022 06:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 63581 ']' 00:11:11.022 06:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 63581 00:11:11.022 06:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:11:11.022 06:20:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:11.022 06:20:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63581 00:11:11.022 06:20:55 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:11.022 killing process with pid 63581 00:11:11.022 06:20:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:11.022 06:20:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63581' 00:11:11.022 06:20:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 63581 00:11:11.022 [2024-11-26 06:20:55.034341] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:11.022 [2024-11-26 06:20:55.034474] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:11.022 06:20:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 63581 00:11:11.022 [2024-11-26 06:20:55.034543] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:11.022 [2024-11-26 06:20:55.034564] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:11:11.282 [2024-11-26 06:20:55.289556] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:12.663 ************************************ 00:11:12.663 END TEST raid_superblock_test 00:11:12.663 ************************************ 00:11:12.663 06:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:12.663 00:11:12.663 real 0m6.740s 00:11:12.663 user 0m9.992s 00:11:12.663 sys 0m1.282s 00:11:12.663 06:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:12.663 06:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.663 06:20:56 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:11:12.663 06:20:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:12.663 06:20:56 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:11:12.663 06:20:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:12.663 ************************************ 00:11:12.663 START TEST raid_read_error_test 00:11:12.663 ************************************ 00:11:12.663 06:20:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:11:12.663 06:20:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:12.663 06:20:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:11:12.663 06:20:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:12.663 06:20:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:12.663 06:20:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:12.663 06:20:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:12.663 06:20:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:12.663 06:20:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:12.663 06:20:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:12.663 06:20:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:12.663 06:20:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:12.663 06:20:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:12.663 06:20:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:12.663 06:20:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:12.663 06:20:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:12.663 06:20:56 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:12.663 06:20:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:12.663 06:20:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:12.663 06:20:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:12.663 06:20:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:12.663 06:20:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:12.663 06:20:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Gggd0Dej7y 00:11:12.663 06:20:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63921 00:11:12.664 06:20:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:12.664 06:20:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63921 00:11:12.664 06:20:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 63921 ']' 00:11:12.664 06:20:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:12.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:12.664 06:20:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:12.664 06:20:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:12.664 06:20:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:12.664 06:20:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.923 [2024-11-26 06:20:56.825728] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:11:12.923 [2024-11-26 06:20:56.825855] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63921 ] 00:11:12.923 [2024-11-26 06:20:57.002158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.181 [2024-11-26 06:20:57.157360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.441 [2024-11-26 06:20:57.407208] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:13.441 [2024-11-26 06:20:57.407299] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:13.767 06:20:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:13.767 06:20:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:13.767 06:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:13.767 06:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:13.767 06:20:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.767 06:20:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.767 BaseBdev1_malloc 00:11:13.767 06:20:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.767 06:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:11:13.767 06:20:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.767 06:20:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.767 true 00:11:13.767 06:20:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.767 06:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:13.767 06:20:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.767 06:20:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.767 [2024-11-26 06:20:57.798331] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:13.767 [2024-11-26 06:20:57.798405] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:13.767 [2024-11-26 06:20:57.798435] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:13.767 [2024-11-26 06:20:57.798449] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:13.767 [2024-11-26 06:20:57.801484] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:13.767 [2024-11-26 06:20:57.801529] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:13.767 BaseBdev1 00:11:13.767 06:20:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.767 06:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:13.767 06:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:13.767 06:20:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.767 06:20:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:13.767 BaseBdev2_malloc 00:11:13.767 06:20:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.767 06:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:13.767 06:20:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.767 06:20:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.767 true 00:11:13.767 06:20:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.767 06:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:13.767 06:20:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.767 06:20:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.767 [2024-11-26 06:20:57.876846] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:13.767 [2024-11-26 06:20:57.876972] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:13.767 [2024-11-26 06:20:57.877003] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:13.767 [2024-11-26 06:20:57.877028] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:13.767 [2024-11-26 06:20:57.880047] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:13.767 [2024-11-26 06:20:57.880153] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:13.767 BaseBdev2 00:11:13.767 06:20:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.767 06:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:11:13.767 06:20:57 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.767 06:20:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.036 [2024-11-26 06:20:57.884901] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:14.036 [2024-11-26 06:20:57.887388] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:14.036 [2024-11-26 06:20:57.887786] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:14.036 [2024-11-26 06:20:57.887815] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:14.036 [2024-11-26 06:20:57.888210] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:14.036 [2024-11-26 06:20:57.888484] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:14.036 [2024-11-26 06:20:57.888498] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:14.036 [2024-11-26 06:20:57.888777] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:14.036 06:20:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.036 06:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:14.036 06:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:14.036 06:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:14.036 06:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:14.036 06:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:14.036 06:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:11:14.036 06:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.036 06:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.036 06:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.036 06:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.036 06:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:14.036 06:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.036 06:20:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.036 06:20:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.036 06:20:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.036 06:20:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.036 "name": "raid_bdev1", 00:11:14.036 "uuid": "14cc6d86-d670-423e-b226-8d209eb5c7da", 00:11:14.036 "strip_size_kb": 0, 00:11:14.036 "state": "online", 00:11:14.036 "raid_level": "raid1", 00:11:14.036 "superblock": true, 00:11:14.036 "num_base_bdevs": 2, 00:11:14.036 "num_base_bdevs_discovered": 2, 00:11:14.036 "num_base_bdevs_operational": 2, 00:11:14.036 "base_bdevs_list": [ 00:11:14.036 { 00:11:14.036 "name": "BaseBdev1", 00:11:14.036 "uuid": "67c93e4d-b3a5-53d2-aa41-e1281734003c", 00:11:14.036 "is_configured": true, 00:11:14.036 "data_offset": 2048, 00:11:14.036 "data_size": 63488 00:11:14.036 }, 00:11:14.036 { 00:11:14.036 "name": "BaseBdev2", 00:11:14.036 "uuid": "000f8505-3fc9-57af-ac1b-52dd45b54506", 00:11:14.036 "is_configured": true, 00:11:14.036 "data_offset": 2048, 00:11:14.036 "data_size": 63488 00:11:14.036 } 00:11:14.036 ] 00:11:14.036 }' 00:11:14.036 06:20:57 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.036 06:20:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.296 06:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:14.296 06:20:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:14.556 [2024-11-26 06:20:58.469568] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:15.494 06:20:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:15.494 06:20:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.494 06:20:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.494 06:20:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.494 06:20:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:15.494 06:20:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:15.494 06:20:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:11:15.495 06:20:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:11:15.495 06:20:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:15.495 06:20:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:15.495 06:20:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:15.495 06:20:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:15.495 06:20:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:15.495 06:20:59 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:15.495 06:20:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.495 06:20:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.495 06:20:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.495 06:20:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.495 06:20:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.495 06:20:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:15.495 06:20:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.495 06:20:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.495 06:20:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.495 06:20:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.495 "name": "raid_bdev1", 00:11:15.495 "uuid": "14cc6d86-d670-423e-b226-8d209eb5c7da", 00:11:15.495 "strip_size_kb": 0, 00:11:15.495 "state": "online", 00:11:15.495 "raid_level": "raid1", 00:11:15.495 "superblock": true, 00:11:15.495 "num_base_bdevs": 2, 00:11:15.495 "num_base_bdevs_discovered": 2, 00:11:15.495 "num_base_bdevs_operational": 2, 00:11:15.495 "base_bdevs_list": [ 00:11:15.495 { 00:11:15.495 "name": "BaseBdev1", 00:11:15.495 "uuid": "67c93e4d-b3a5-53d2-aa41-e1281734003c", 00:11:15.495 "is_configured": true, 00:11:15.495 "data_offset": 2048, 00:11:15.495 "data_size": 63488 00:11:15.495 }, 00:11:15.495 { 00:11:15.495 "name": "BaseBdev2", 00:11:15.495 "uuid": "000f8505-3fc9-57af-ac1b-52dd45b54506", 00:11:15.495 "is_configured": true, 00:11:15.495 "data_offset": 2048, 00:11:15.495 "data_size": 63488 
00:11:15.495 } 00:11:15.495 ] 00:11:15.495 }' 00:11:15.495 06:20:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.495 06:20:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.754 06:20:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:15.754 06:20:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.754 06:20:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.754 [2024-11-26 06:20:59.812612] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:15.754 [2024-11-26 06:20:59.812747] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:15.754 [2024-11-26 06:20:59.815938] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:15.754 [2024-11-26 06:20:59.816038] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:15.754 [2024-11-26 06:20:59.816228] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:15.754 [2024-11-26 06:20:59.816306] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:15.754 { 00:11:15.754 "results": [ 00:11:15.754 { 00:11:15.754 "job": "raid_bdev1", 00:11:15.754 "core_mask": "0x1", 00:11:15.754 "workload": "randrw", 00:11:15.754 "percentage": 50, 00:11:15.754 "status": "finished", 00:11:15.754 "queue_depth": 1, 00:11:15.754 "io_size": 131072, 00:11:15.754 "runtime": 1.343416, 00:11:15.754 "iops": 12457.794160557862, 00:11:15.755 "mibps": 1557.2242700697327, 00:11:15.755 "io_failed": 0, 00:11:15.755 "io_timeout": 0, 00:11:15.755 "avg_latency_us": 77.3497440864345, 00:11:15.755 "min_latency_us": 26.1589519650655, 00:11:15.755 "max_latency_us": 1831.5737991266376 00:11:15.755 } 00:11:15.755 ], 
00:11:15.755 "core_count": 1 00:11:15.755 } 00:11:15.755 06:20:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.755 06:20:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63921 00:11:15.755 06:20:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 63921 ']' 00:11:15.755 06:20:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63921 00:11:15.755 06:20:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:15.755 06:20:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:15.755 06:20:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63921 00:11:15.755 killing process with pid 63921 00:11:15.755 06:20:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:15.755 06:20:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:15.755 06:20:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63921' 00:11:15.755 06:20:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63921 00:11:15.755 [2024-11-26 06:20:59.862296] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:15.755 06:20:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63921 00:11:16.015 [2024-11-26 06:21:00.029496] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:17.396 06:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:17.396 06:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Gggd0Dej7y 00:11:17.396 06:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:17.396 ************************************ 00:11:17.396 END 
TEST raid_read_error_test 00:11:17.396 ************************************ 00:11:17.396 06:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:17.396 06:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:17.396 06:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:17.396 06:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:17.396 06:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:17.396 00:11:17.396 real 0m4.716s 00:11:17.396 user 0m5.519s 00:11:17.396 sys 0m0.704s 00:11:17.396 06:21:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:17.396 06:21:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.396 06:21:01 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:11:17.397 06:21:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:17.397 06:21:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:17.397 06:21:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:17.397 ************************************ 00:11:17.397 START TEST raid_write_error_test 00:11:17.397 ************************************ 00:11:17.397 06:21:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:11:17.397 06:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:17.397 06:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:11:17.397 06:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:17.397 06:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:17.397 06:21:01 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:17.397 06:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:17.397 06:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:17.397 06:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:17.397 06:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:17.397 06:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:17.397 06:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:17.397 06:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:17.397 06:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:17.397 06:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:17.397 06:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:17.397 06:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:17.397 06:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:17.397 06:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:17.397 06:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:17.397 06:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:17.397 06:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:17.397 06:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.utEn8O7yol 00:11:17.397 06:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w 
randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:17.397 06:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=64068 00:11:17.397 06:21:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 64068 00:11:17.397 06:21:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 64068 ']' 00:11:17.397 06:21:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:17.397 06:21:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:17.397 06:21:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:17.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:17.397 06:21:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:17.397 06:21:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.657 [2024-11-26 06:21:01.603642] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:11:17.657 [2024-11-26 06:21:01.603915] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64068 ] 00:11:17.657 [2024-11-26 06:21:01.786625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:17.917 [2024-11-26 06:21:01.946013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.176 [2024-11-26 06:21:02.196915] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:18.176 [2024-11-26 06:21:02.197030] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:18.436 06:21:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:18.436 06:21:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:18.436 06:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:18.436 06:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:18.436 06:21:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.436 06:21:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.436 BaseBdev1_malloc 00:11:18.436 06:21:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.436 06:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:18.436 06:21:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.436 06:21:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.695 true 00:11:18.695 06:21:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:18.695 06:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:18.695 06:21:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.695 06:21:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.695 [2024-11-26 06:21:02.575170] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:18.695 [2024-11-26 06:21:02.575233] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:18.695 [2024-11-26 06:21:02.575255] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:18.695 [2024-11-26 06:21:02.575266] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:18.695 [2024-11-26 06:21:02.577757] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:18.695 [2024-11-26 06:21:02.577801] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:18.695 BaseBdev1 00:11:18.695 06:21:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.695 06:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:18.695 06:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:18.695 06:21:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.695 06:21:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.695 BaseBdev2_malloc 00:11:18.695 06:21:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.695 06:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:18.695 06:21:02 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.695 06:21:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.695 true 00:11:18.695 06:21:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.695 06:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:18.695 06:21:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.695 06:21:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.695 [2024-11-26 06:21:02.650969] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:18.695 [2024-11-26 06:21:02.651032] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:18.696 [2024-11-26 06:21:02.651143] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:18.696 [2024-11-26 06:21:02.651178] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:18.696 [2024-11-26 06:21:02.653743] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:18.696 [2024-11-26 06:21:02.653785] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:18.696 BaseBdev2 00:11:18.696 06:21:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.696 06:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:11:18.696 06:21:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.696 06:21:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.696 [2024-11-26 06:21:02.663015] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:11:18.696 [2024-11-26 06:21:02.665219] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:18.696 [2024-11-26 06:21:02.665500] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:18.696 [2024-11-26 06:21:02.665521] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:18.696 [2024-11-26 06:21:02.665782] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:18.696 [2024-11-26 06:21:02.665982] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:18.696 [2024-11-26 06:21:02.665994] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:18.696 [2024-11-26 06:21:02.666174] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:18.696 06:21:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.696 06:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:18.696 06:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:18.696 06:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:18.696 06:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:18.696 06:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:18.696 06:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:18.696 06:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.696 06:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.696 06:21:02 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.696 06:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.696 06:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.696 06:21:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.696 06:21:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.696 06:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:18.696 06:21:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.696 06:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.696 "name": "raid_bdev1", 00:11:18.696 "uuid": "140ded16-bf31-44ec-96aa-9c6f76286075", 00:11:18.696 "strip_size_kb": 0, 00:11:18.696 "state": "online", 00:11:18.696 "raid_level": "raid1", 00:11:18.696 "superblock": true, 00:11:18.696 "num_base_bdevs": 2, 00:11:18.696 "num_base_bdevs_discovered": 2, 00:11:18.696 "num_base_bdevs_operational": 2, 00:11:18.696 "base_bdevs_list": [ 00:11:18.696 { 00:11:18.696 "name": "BaseBdev1", 00:11:18.696 "uuid": "73700bd3-4251-5f98-aaf4-4a10af3d9d75", 00:11:18.696 "is_configured": true, 00:11:18.696 "data_offset": 2048, 00:11:18.696 "data_size": 63488 00:11:18.696 }, 00:11:18.696 { 00:11:18.696 "name": "BaseBdev2", 00:11:18.696 "uuid": "6eb73bac-72c1-5bb0-81dc-6f8aeddd544f", 00:11:18.696 "is_configured": true, 00:11:18.696 "data_offset": 2048, 00:11:18.696 "data_size": 63488 00:11:18.696 } 00:11:18.696 ] 00:11:18.696 }' 00:11:18.696 06:21:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.696 06:21:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.265 06:21:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:19.265 06:21:03 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:19.265 [2024-11-26 06:21:03.251938] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:20.205 06:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:20.205 06:21:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.205 06:21:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.205 [2024-11-26 06:21:04.160651] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:11:20.205 [2024-11-26 06:21:04.160865] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:20.205 [2024-11-26 06:21:04.161213] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:11:20.205 06:21:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.205 06:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:20.205 06:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:20.205 06:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:11:20.205 06:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:11:20.205 06:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:20.205 06:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:20.205 06:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:20.205 06:21:04 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:20.205 06:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:20.205 06:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:20.205 06:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.205 06:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.205 06:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.205 06:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.205 06:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.205 06:21:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.205 06:21:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.205 06:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:20.205 06:21:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.205 06:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.205 "name": "raid_bdev1", 00:11:20.205 "uuid": "140ded16-bf31-44ec-96aa-9c6f76286075", 00:11:20.205 "strip_size_kb": 0, 00:11:20.205 "state": "online", 00:11:20.205 "raid_level": "raid1", 00:11:20.205 "superblock": true, 00:11:20.205 "num_base_bdevs": 2, 00:11:20.205 "num_base_bdevs_discovered": 1, 00:11:20.205 "num_base_bdevs_operational": 1, 00:11:20.205 "base_bdevs_list": [ 00:11:20.205 { 00:11:20.205 "name": null, 00:11:20.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.205 "is_configured": false, 00:11:20.205 "data_offset": 0, 00:11:20.205 "data_size": 63488 00:11:20.205 }, 00:11:20.205 { 00:11:20.205 "name": 
"BaseBdev2", 00:11:20.205 "uuid": "6eb73bac-72c1-5bb0-81dc-6f8aeddd544f", 00:11:20.205 "is_configured": true, 00:11:20.205 "data_offset": 2048, 00:11:20.205 "data_size": 63488 00:11:20.205 } 00:11:20.205 ] 00:11:20.205 }' 00:11:20.205 06:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.205 06:21:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.535 06:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:20.535 06:21:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.535 06:21:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.535 [2024-11-26 06:21:04.602538] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:20.535 [2024-11-26 06:21:04.602575] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:20.535 [2024-11-26 06:21:04.605364] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:20.535 [2024-11-26 06:21:04.605415] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:20.535 [2024-11-26 06:21:04.605480] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:20.535 [2024-11-26 06:21:04.605494] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:20.536 { 00:11:20.536 "results": [ 00:11:20.536 { 00:11:20.536 "job": "raid_bdev1", 00:11:20.536 "core_mask": "0x1", 00:11:20.536 "workload": "randrw", 00:11:20.536 "percentage": 50, 00:11:20.536 "status": "finished", 00:11:20.536 "queue_depth": 1, 00:11:20.536 "io_size": 131072, 00:11:20.536 "runtime": 1.350575, 00:11:20.536 "iops": 14272.43951650223, 00:11:20.536 "mibps": 1784.0549395627788, 00:11:20.536 "io_failed": 0, 00:11:20.536 "io_timeout": 0, 
00:11:20.536 "avg_latency_us": 67.04297961761623, 00:11:20.536 "min_latency_us": 23.36419213973799, 00:11:20.536 "max_latency_us": 1874.5013100436681 00:11:20.536 } 00:11:20.536 ], 00:11:20.536 "core_count": 1 00:11:20.536 } 00:11:20.536 06:21:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.536 06:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 64068 00:11:20.536 06:21:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 64068 ']' 00:11:20.536 06:21:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 64068 00:11:20.536 06:21:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:11:20.536 06:21:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:20.536 06:21:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64068 00:11:20.536 killing process with pid 64068 00:11:20.536 06:21:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:20.536 06:21:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:20.536 06:21:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64068' 00:11:20.536 06:21:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 64068 00:11:20.536 [2024-11-26 06:21:04.650800] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:20.536 06:21:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 64068 00:11:20.794 [2024-11-26 06:21:04.805204] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:22.176 06:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.utEn8O7yol 00:11:22.176 06:21:06 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:22.176 06:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:22.176 06:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:22.176 06:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:22.176 06:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:22.176 06:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:22.176 06:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:22.176 00:11:22.176 real 0m4.713s 00:11:22.176 user 0m5.512s 00:11:22.176 sys 0m0.686s 00:11:22.176 06:21:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:22.176 06:21:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.176 ************************************ 00:11:22.176 END TEST raid_write_error_test 00:11:22.176 ************************************ 00:11:22.176 06:21:06 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:11:22.176 06:21:06 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:22.176 06:21:06 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:11:22.176 06:21:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:22.176 06:21:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:22.176 06:21:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:22.176 ************************************ 00:11:22.176 START TEST raid_state_function_test 00:11:22.176 ************************************ 00:11:22.176 06:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:11:22.176 06:21:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:11:22.176 06:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:11:22.176 06:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:22.176 06:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:22.176 06:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:22.176 06:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:22.176 06:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:22.176 06:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:22.176 06:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:22.176 06:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:22.176 06:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:22.176 06:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:22.176 06:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:22.176 06:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:22.176 06:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:22.176 06:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:22.176 06:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:22.176 06:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:22.176 06:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:22.176 
06:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:22.176 06:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:22.176 06:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:11:22.176 06:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:22.176 06:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:22.176 06:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:22.176 06:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:22.176 06:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=64206 00:11:22.176 06:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:22.176 06:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64206' 00:11:22.176 Process raid pid: 64206 00:11:22.176 06:21:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 64206 00:11:22.176 06:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 64206 ']' 00:11:22.176 06:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:22.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:22.176 06:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:22.176 06:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:22.176 06:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:22.176 06:21:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.436 [2024-11-26 06:21:06.387285] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:11:22.436 [2024-11-26 06:21:06.387597] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:22.695 [2024-11-26 06:21:06.589739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:22.695 [2024-11-26 06:21:06.744012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.953 [2024-11-26 06:21:07.005275] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:22.953 [2024-11-26 06:21:07.005339] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:23.214 06:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:23.214 06:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:23.214 06:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:23.214 06:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.214 06:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.214 [2024-11-26 06:21:07.254990] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:23.214 [2024-11-26 06:21:07.255083] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:23.214 [2024-11-26 06:21:07.255096] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:23.214 [2024-11-26 06:21:07.255107] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:23.214 [2024-11-26 06:21:07.255114] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:23.214 [2024-11-26 06:21:07.255124] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:23.214 06:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.214 06:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:23.214 06:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:23.214 06:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:23.214 06:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:23.214 06:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:23.214 06:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:23.214 06:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.214 06:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.214 06:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.214 06:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.214 06:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:23.214 06:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:11:23.214 06:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.214 06:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.214 06:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.214 06:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.214 "name": "Existed_Raid", 00:11:23.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.214 "strip_size_kb": 64, 00:11:23.214 "state": "configuring", 00:11:23.214 "raid_level": "raid0", 00:11:23.214 "superblock": false, 00:11:23.214 "num_base_bdevs": 3, 00:11:23.214 "num_base_bdevs_discovered": 0, 00:11:23.214 "num_base_bdevs_operational": 3, 00:11:23.214 "base_bdevs_list": [ 00:11:23.214 { 00:11:23.214 "name": "BaseBdev1", 00:11:23.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.214 "is_configured": false, 00:11:23.214 "data_offset": 0, 00:11:23.214 "data_size": 0 00:11:23.214 }, 00:11:23.214 { 00:11:23.214 "name": "BaseBdev2", 00:11:23.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.214 "is_configured": false, 00:11:23.214 "data_offset": 0, 00:11:23.214 "data_size": 0 00:11:23.214 }, 00:11:23.214 { 00:11:23.214 "name": "BaseBdev3", 00:11:23.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.214 "is_configured": false, 00:11:23.214 "data_offset": 0, 00:11:23.214 "data_size": 0 00:11:23.214 } 00:11:23.214 ] 00:11:23.214 }' 00:11:23.214 06:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.214 06:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.808 06:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:23.808 06:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.808 06:21:07 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.808 [2024-11-26 06:21:07.710150] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:23.808 [2024-11-26 06:21:07.710251] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:23.808 06:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.808 06:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:23.808 06:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.808 06:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.808 [2024-11-26 06:21:07.722088] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:23.808 [2024-11-26 06:21:07.722178] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:23.808 [2024-11-26 06:21:07.722206] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:23.808 [2024-11-26 06:21:07.722230] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:23.808 [2024-11-26 06:21:07.722249] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:23.808 [2024-11-26 06:21:07.722271] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:23.808 06:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.808 06:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:23.808 06:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:23.808 06:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.808 [2024-11-26 06:21:07.779600] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:23.808 BaseBdev1 00:11:23.808 06:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.808 06:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:23.808 06:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:23.808 06:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:23.808 06:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:23.808 06:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:23.808 06:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:23.808 06:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:23.808 06:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.809 06:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.809 06:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.809 06:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:23.809 06:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.809 06:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.809 [ 00:11:23.809 { 00:11:23.809 "name": "BaseBdev1", 00:11:23.809 "aliases": [ 00:11:23.809 "616b7564-f6e9-420c-bfb8-1ed3e34dc126" 00:11:23.809 ], 00:11:23.809 
"product_name": "Malloc disk", 00:11:23.809 "block_size": 512, 00:11:23.809 "num_blocks": 65536, 00:11:23.809 "uuid": "616b7564-f6e9-420c-bfb8-1ed3e34dc126", 00:11:23.809 "assigned_rate_limits": { 00:11:23.809 "rw_ios_per_sec": 0, 00:11:23.809 "rw_mbytes_per_sec": 0, 00:11:23.809 "r_mbytes_per_sec": 0, 00:11:23.809 "w_mbytes_per_sec": 0 00:11:23.809 }, 00:11:23.809 "claimed": true, 00:11:23.809 "claim_type": "exclusive_write", 00:11:23.809 "zoned": false, 00:11:23.809 "supported_io_types": { 00:11:23.809 "read": true, 00:11:23.809 "write": true, 00:11:23.809 "unmap": true, 00:11:23.809 "flush": true, 00:11:23.809 "reset": true, 00:11:23.809 "nvme_admin": false, 00:11:23.809 "nvme_io": false, 00:11:23.809 "nvme_io_md": false, 00:11:23.809 "write_zeroes": true, 00:11:23.809 "zcopy": true, 00:11:23.809 "get_zone_info": false, 00:11:23.809 "zone_management": false, 00:11:23.809 "zone_append": false, 00:11:23.809 "compare": false, 00:11:23.809 "compare_and_write": false, 00:11:23.809 "abort": true, 00:11:23.809 "seek_hole": false, 00:11:23.809 "seek_data": false, 00:11:23.809 "copy": true, 00:11:23.809 "nvme_iov_md": false 00:11:23.809 }, 00:11:23.809 "memory_domains": [ 00:11:23.809 { 00:11:23.809 "dma_device_id": "system", 00:11:23.809 "dma_device_type": 1 00:11:23.809 }, 00:11:23.809 { 00:11:23.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.809 "dma_device_type": 2 00:11:23.809 } 00:11:23.809 ], 00:11:23.809 "driver_specific": {} 00:11:23.809 } 00:11:23.809 ] 00:11:23.809 06:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.809 06:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:23.809 06:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:23.809 06:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:23.809 06:21:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:23.809 06:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:23.809 06:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:23.809 06:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:23.809 06:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.809 06:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.809 06:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.809 06:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.809 06:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.809 06:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:23.809 06:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.809 06:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.809 06:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.809 06:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.809 "name": "Existed_Raid", 00:11:23.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.809 "strip_size_kb": 64, 00:11:23.809 "state": "configuring", 00:11:23.809 "raid_level": "raid0", 00:11:23.809 "superblock": false, 00:11:23.809 "num_base_bdevs": 3, 00:11:23.809 "num_base_bdevs_discovered": 1, 00:11:23.809 "num_base_bdevs_operational": 3, 00:11:23.809 "base_bdevs_list": [ 00:11:23.809 { 00:11:23.809 "name": "BaseBdev1", 
00:11:23.809 "uuid": "616b7564-f6e9-420c-bfb8-1ed3e34dc126", 00:11:23.809 "is_configured": true, 00:11:23.809 "data_offset": 0, 00:11:23.809 "data_size": 65536 00:11:23.809 }, 00:11:23.809 { 00:11:23.809 "name": "BaseBdev2", 00:11:23.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.809 "is_configured": false, 00:11:23.809 "data_offset": 0, 00:11:23.809 "data_size": 0 00:11:23.809 }, 00:11:23.809 { 00:11:23.809 "name": "BaseBdev3", 00:11:23.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.809 "is_configured": false, 00:11:23.809 "data_offset": 0, 00:11:23.809 "data_size": 0 00:11:23.809 } 00:11:23.809 ] 00:11:23.809 }' 00:11:23.809 06:21:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.809 06:21:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.377 06:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:24.377 06:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.377 06:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.377 [2024-11-26 06:21:08.318797] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:24.377 [2024-11-26 06:21:08.318944] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:24.377 06:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.377 06:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:24.377 06:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.377 06:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.377 [2024-11-26 
06:21:08.330793] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:24.377 [2024-11-26 06:21:08.333143] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:24.377 [2024-11-26 06:21:08.333236] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:24.377 [2024-11-26 06:21:08.333271] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:24.377 [2024-11-26 06:21:08.333297] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:24.377 06:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.377 06:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:24.377 06:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:24.377 06:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:24.377 06:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:24.377 06:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:24.377 06:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:24.377 06:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:24.377 06:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:24.377 06:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.377 06:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.377 06:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:24.377 06:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.377 06:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:24.377 06:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.377 06:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.377 06:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.377 06:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.377 06:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.377 "name": "Existed_Raid", 00:11:24.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.377 "strip_size_kb": 64, 00:11:24.377 "state": "configuring", 00:11:24.377 "raid_level": "raid0", 00:11:24.377 "superblock": false, 00:11:24.377 "num_base_bdevs": 3, 00:11:24.377 "num_base_bdevs_discovered": 1, 00:11:24.377 "num_base_bdevs_operational": 3, 00:11:24.377 "base_bdevs_list": [ 00:11:24.377 { 00:11:24.377 "name": "BaseBdev1", 00:11:24.377 "uuid": "616b7564-f6e9-420c-bfb8-1ed3e34dc126", 00:11:24.377 "is_configured": true, 00:11:24.377 "data_offset": 0, 00:11:24.377 "data_size": 65536 00:11:24.377 }, 00:11:24.377 { 00:11:24.377 "name": "BaseBdev2", 00:11:24.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.377 "is_configured": false, 00:11:24.377 "data_offset": 0, 00:11:24.377 "data_size": 0 00:11:24.377 }, 00:11:24.377 { 00:11:24.377 "name": "BaseBdev3", 00:11:24.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.377 "is_configured": false, 00:11:24.377 "data_offset": 0, 00:11:24.377 "data_size": 0 00:11:24.377 } 00:11:24.377 ] 00:11:24.377 }' 00:11:24.377 06:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:11:24.377 06:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.945 06:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:24.945 06:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.945 06:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.945 [2024-11-26 06:21:08.932106] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:24.945 BaseBdev2 00:11:24.945 06:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.945 06:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:24.945 06:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:24.945 06:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:24.945 06:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:24.945 06:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:24.945 06:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:24.945 06:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:24.945 06:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.945 06:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.945 06:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.945 06:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:24.945 06:21:08 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.945 06:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.945 [ 00:11:24.945 { 00:11:24.945 "name": "BaseBdev2", 00:11:24.945 "aliases": [ 00:11:24.945 "04571f2e-1ead-4aa7-976e-a3211ec93572" 00:11:24.945 ], 00:11:24.945 "product_name": "Malloc disk", 00:11:24.945 "block_size": 512, 00:11:24.945 "num_blocks": 65536, 00:11:24.945 "uuid": "04571f2e-1ead-4aa7-976e-a3211ec93572", 00:11:24.945 "assigned_rate_limits": { 00:11:24.945 "rw_ios_per_sec": 0, 00:11:24.945 "rw_mbytes_per_sec": 0, 00:11:24.945 "r_mbytes_per_sec": 0, 00:11:24.945 "w_mbytes_per_sec": 0 00:11:24.945 }, 00:11:24.945 "claimed": true, 00:11:24.945 "claim_type": "exclusive_write", 00:11:24.945 "zoned": false, 00:11:24.945 "supported_io_types": { 00:11:24.945 "read": true, 00:11:24.945 "write": true, 00:11:24.945 "unmap": true, 00:11:24.945 "flush": true, 00:11:24.945 "reset": true, 00:11:24.945 "nvme_admin": false, 00:11:24.945 "nvme_io": false, 00:11:24.945 "nvme_io_md": false, 00:11:24.945 "write_zeroes": true, 00:11:24.945 "zcopy": true, 00:11:24.945 "get_zone_info": false, 00:11:24.945 "zone_management": false, 00:11:24.945 "zone_append": false, 00:11:24.945 "compare": false, 00:11:24.945 "compare_and_write": false, 00:11:24.945 "abort": true, 00:11:24.945 "seek_hole": false, 00:11:24.945 "seek_data": false, 00:11:24.945 "copy": true, 00:11:24.945 "nvme_iov_md": false 00:11:24.945 }, 00:11:24.945 "memory_domains": [ 00:11:24.945 { 00:11:24.945 "dma_device_id": "system", 00:11:24.945 "dma_device_type": 1 00:11:24.945 }, 00:11:24.945 { 00:11:24.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.945 "dma_device_type": 2 00:11:24.945 } 00:11:24.945 ], 00:11:24.945 "driver_specific": {} 00:11:24.945 } 00:11:24.945 ] 00:11:24.945 06:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.945 06:21:08 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:24.945 06:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:24.945 06:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:24.945 06:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:24.945 06:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:24.945 06:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:24.945 06:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:24.945 06:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:24.945 06:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:24.945 06:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.945 06:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.945 06:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.945 06:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.945 06:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.945 06:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:24.945 06:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.945 06:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.945 06:21:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.945 06:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.945 "name": "Existed_Raid", 00:11:24.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.945 "strip_size_kb": 64, 00:11:24.945 "state": "configuring", 00:11:24.945 "raid_level": "raid0", 00:11:24.945 "superblock": false, 00:11:24.945 "num_base_bdevs": 3, 00:11:24.945 "num_base_bdevs_discovered": 2, 00:11:24.945 "num_base_bdevs_operational": 3, 00:11:24.945 "base_bdevs_list": [ 00:11:24.945 { 00:11:24.945 "name": "BaseBdev1", 00:11:24.945 "uuid": "616b7564-f6e9-420c-bfb8-1ed3e34dc126", 00:11:24.945 "is_configured": true, 00:11:24.945 "data_offset": 0, 00:11:24.945 "data_size": 65536 00:11:24.945 }, 00:11:24.945 { 00:11:24.945 "name": "BaseBdev2", 00:11:24.945 "uuid": "04571f2e-1ead-4aa7-976e-a3211ec93572", 00:11:24.945 "is_configured": true, 00:11:24.945 "data_offset": 0, 00:11:24.945 "data_size": 65536 00:11:24.945 }, 00:11:24.945 { 00:11:24.945 "name": "BaseBdev3", 00:11:24.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.945 "is_configured": false, 00:11:24.945 "data_offset": 0, 00:11:24.945 "data_size": 0 00:11:24.945 } 00:11:24.945 ] 00:11:24.945 }' 00:11:24.946 06:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.946 06:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.514 06:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:25.514 06:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.514 06:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.514 [2024-11-26 06:21:09.527019] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:25.514 [2024-11-26 06:21:09.527165] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:25.514 [2024-11-26 06:21:09.527204] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:11:25.514 [2024-11-26 06:21:09.527606] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:25.514 [2024-11-26 06:21:09.527874] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:25.514 [2024-11-26 06:21:09.527922] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:25.514 [2024-11-26 06:21:09.528321] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:25.514 BaseBdev3 00:11:25.514 06:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.514 06:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:25.514 06:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:25.514 06:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:25.514 06:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:25.514 06:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:25.514 06:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:25.514 06:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:25.514 06:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.514 06:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.514 06:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.514 
06:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:25.514 06:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.514 06:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.514 [ 00:11:25.514 { 00:11:25.514 "name": "BaseBdev3", 00:11:25.514 "aliases": [ 00:11:25.514 "f01761ef-c315-4c36-832d-50cd32f1713d" 00:11:25.514 ], 00:11:25.514 "product_name": "Malloc disk", 00:11:25.514 "block_size": 512, 00:11:25.514 "num_blocks": 65536, 00:11:25.514 "uuid": "f01761ef-c315-4c36-832d-50cd32f1713d", 00:11:25.514 "assigned_rate_limits": { 00:11:25.514 "rw_ios_per_sec": 0, 00:11:25.514 "rw_mbytes_per_sec": 0, 00:11:25.514 "r_mbytes_per_sec": 0, 00:11:25.514 "w_mbytes_per_sec": 0 00:11:25.514 }, 00:11:25.514 "claimed": true, 00:11:25.514 "claim_type": "exclusive_write", 00:11:25.514 "zoned": false, 00:11:25.514 "supported_io_types": { 00:11:25.514 "read": true, 00:11:25.514 "write": true, 00:11:25.514 "unmap": true, 00:11:25.514 "flush": true, 00:11:25.514 "reset": true, 00:11:25.514 "nvme_admin": false, 00:11:25.514 "nvme_io": false, 00:11:25.514 "nvme_io_md": false, 00:11:25.514 "write_zeroes": true, 00:11:25.514 "zcopy": true, 00:11:25.514 "get_zone_info": false, 00:11:25.514 "zone_management": false, 00:11:25.514 "zone_append": false, 00:11:25.515 "compare": false, 00:11:25.515 "compare_and_write": false, 00:11:25.515 "abort": true, 00:11:25.515 "seek_hole": false, 00:11:25.515 "seek_data": false, 00:11:25.515 "copy": true, 00:11:25.515 "nvme_iov_md": false 00:11:25.515 }, 00:11:25.515 "memory_domains": [ 00:11:25.515 { 00:11:25.515 "dma_device_id": "system", 00:11:25.515 "dma_device_type": 1 00:11:25.515 }, 00:11:25.515 { 00:11:25.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:25.515 "dma_device_type": 2 00:11:25.515 } 00:11:25.515 ], 00:11:25.515 "driver_specific": {} 00:11:25.515 } 00:11:25.515 ] 
00:11:25.515 06:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.515 06:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:25.515 06:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:25.515 06:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:25.515 06:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:11:25.515 06:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:25.515 06:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:25.515 06:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:25.515 06:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:25.515 06:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:25.515 06:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.515 06:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.515 06:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.515 06:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.515 06:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.515 06:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:25.515 06:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.515 06:21:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:25.515 06:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.515 06:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.515 "name": "Existed_Raid", 00:11:25.515 "uuid": "27159d84-efdd-4b0a-912a-20c3a5d5f5f9", 00:11:25.515 "strip_size_kb": 64, 00:11:25.515 "state": "online", 00:11:25.515 "raid_level": "raid0", 00:11:25.515 "superblock": false, 00:11:25.515 "num_base_bdevs": 3, 00:11:25.515 "num_base_bdevs_discovered": 3, 00:11:25.515 "num_base_bdevs_operational": 3, 00:11:25.515 "base_bdevs_list": [ 00:11:25.515 { 00:11:25.515 "name": "BaseBdev1", 00:11:25.515 "uuid": "616b7564-f6e9-420c-bfb8-1ed3e34dc126", 00:11:25.515 "is_configured": true, 00:11:25.515 "data_offset": 0, 00:11:25.515 "data_size": 65536 00:11:25.515 }, 00:11:25.515 { 00:11:25.515 "name": "BaseBdev2", 00:11:25.515 "uuid": "04571f2e-1ead-4aa7-976e-a3211ec93572", 00:11:25.515 "is_configured": true, 00:11:25.515 "data_offset": 0, 00:11:25.515 "data_size": 65536 00:11:25.515 }, 00:11:25.515 { 00:11:25.515 "name": "BaseBdev3", 00:11:25.515 "uuid": "f01761ef-c315-4c36-832d-50cd32f1713d", 00:11:25.515 "is_configured": true, 00:11:25.515 "data_offset": 0, 00:11:25.515 "data_size": 65536 00:11:25.515 } 00:11:25.515 ] 00:11:25.515 }' 00:11:25.515 06:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.515 06:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.084 06:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:26.084 06:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:26.084 06:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:26.084 06:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:11:26.084 06:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:26.084 06:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:26.084 06:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:26.084 06:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.084 06:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.084 06:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:26.084 [2024-11-26 06:21:10.030640] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:26.084 06:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.084 06:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:26.084 "name": "Existed_Raid", 00:11:26.084 "aliases": [ 00:11:26.084 "27159d84-efdd-4b0a-912a-20c3a5d5f5f9" 00:11:26.084 ], 00:11:26.084 "product_name": "Raid Volume", 00:11:26.084 "block_size": 512, 00:11:26.084 "num_blocks": 196608, 00:11:26.084 "uuid": "27159d84-efdd-4b0a-912a-20c3a5d5f5f9", 00:11:26.084 "assigned_rate_limits": { 00:11:26.084 "rw_ios_per_sec": 0, 00:11:26.084 "rw_mbytes_per_sec": 0, 00:11:26.084 "r_mbytes_per_sec": 0, 00:11:26.084 "w_mbytes_per_sec": 0 00:11:26.084 }, 00:11:26.084 "claimed": false, 00:11:26.084 "zoned": false, 00:11:26.084 "supported_io_types": { 00:11:26.084 "read": true, 00:11:26.084 "write": true, 00:11:26.084 "unmap": true, 00:11:26.084 "flush": true, 00:11:26.084 "reset": true, 00:11:26.084 "nvme_admin": false, 00:11:26.084 "nvme_io": false, 00:11:26.084 "nvme_io_md": false, 00:11:26.084 "write_zeroes": true, 00:11:26.084 "zcopy": false, 00:11:26.084 "get_zone_info": false, 00:11:26.084 "zone_management": false, 00:11:26.084 
"zone_append": false, 00:11:26.084 "compare": false, 00:11:26.084 "compare_and_write": false, 00:11:26.084 "abort": false, 00:11:26.084 "seek_hole": false, 00:11:26.084 "seek_data": false, 00:11:26.084 "copy": false, 00:11:26.084 "nvme_iov_md": false 00:11:26.084 }, 00:11:26.084 "memory_domains": [ 00:11:26.084 { 00:11:26.084 "dma_device_id": "system", 00:11:26.084 "dma_device_type": 1 00:11:26.084 }, 00:11:26.084 { 00:11:26.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.084 "dma_device_type": 2 00:11:26.084 }, 00:11:26.084 { 00:11:26.084 "dma_device_id": "system", 00:11:26.084 "dma_device_type": 1 00:11:26.084 }, 00:11:26.084 { 00:11:26.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.084 "dma_device_type": 2 00:11:26.084 }, 00:11:26.084 { 00:11:26.084 "dma_device_id": "system", 00:11:26.084 "dma_device_type": 1 00:11:26.084 }, 00:11:26.084 { 00:11:26.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.084 "dma_device_type": 2 00:11:26.084 } 00:11:26.084 ], 00:11:26.084 "driver_specific": { 00:11:26.084 "raid": { 00:11:26.084 "uuid": "27159d84-efdd-4b0a-912a-20c3a5d5f5f9", 00:11:26.084 "strip_size_kb": 64, 00:11:26.084 "state": "online", 00:11:26.084 "raid_level": "raid0", 00:11:26.084 "superblock": false, 00:11:26.084 "num_base_bdevs": 3, 00:11:26.084 "num_base_bdevs_discovered": 3, 00:11:26.084 "num_base_bdevs_operational": 3, 00:11:26.084 "base_bdevs_list": [ 00:11:26.084 { 00:11:26.084 "name": "BaseBdev1", 00:11:26.084 "uuid": "616b7564-f6e9-420c-bfb8-1ed3e34dc126", 00:11:26.084 "is_configured": true, 00:11:26.084 "data_offset": 0, 00:11:26.084 "data_size": 65536 00:11:26.084 }, 00:11:26.084 { 00:11:26.084 "name": "BaseBdev2", 00:11:26.084 "uuid": "04571f2e-1ead-4aa7-976e-a3211ec93572", 00:11:26.084 "is_configured": true, 00:11:26.084 "data_offset": 0, 00:11:26.084 "data_size": 65536 00:11:26.084 }, 00:11:26.084 { 00:11:26.084 "name": "BaseBdev3", 00:11:26.084 "uuid": "f01761ef-c315-4c36-832d-50cd32f1713d", 00:11:26.084 "is_configured": true, 
00:11:26.084 "data_offset": 0, 00:11:26.084 "data_size": 65536 00:11:26.084 } 00:11:26.084 ] 00:11:26.084 } 00:11:26.084 } 00:11:26.084 }' 00:11:26.084 06:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:26.084 06:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:26.084 BaseBdev2 00:11:26.084 BaseBdev3' 00:11:26.084 06:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:26.084 06:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:26.084 06:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:26.084 06:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:26.084 06:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:26.084 06:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.084 06:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.084 06:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.344 06:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:26.344 06:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:26.344 06:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:26.344 06:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:26.344 06:21:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:26.344 06:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.344 06:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.344 06:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.344 06:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:26.344 06:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:26.344 06:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:26.344 06:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:26.344 06:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.344 06:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.344 06:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:26.344 06:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.345 06:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:26.345 06:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:26.345 06:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:26.345 06:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.345 06:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.345 [2024-11-26 06:21:10.321819] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:26.345 [2024-11-26 06:21:10.321854] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:26.345 [2024-11-26 06:21:10.321921] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:26.345 06:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.345 06:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:26.345 06:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:11:26.345 06:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:26.345 06:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:26.345 06:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:26.345 06:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:11:26.345 06:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.345 06:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:26.345 06:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:26.345 06:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:26.345 06:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:26.345 06:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.345 06:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.345 06:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:11:26.345 06:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.345 06:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.345 06:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.345 06:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.345 06:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.345 06:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.604 06:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.604 "name": "Existed_Raid", 00:11:26.604 "uuid": "27159d84-efdd-4b0a-912a-20c3a5d5f5f9", 00:11:26.604 "strip_size_kb": 64, 00:11:26.604 "state": "offline", 00:11:26.604 "raid_level": "raid0", 00:11:26.604 "superblock": false, 00:11:26.604 "num_base_bdevs": 3, 00:11:26.604 "num_base_bdevs_discovered": 2, 00:11:26.604 "num_base_bdevs_operational": 2, 00:11:26.604 "base_bdevs_list": [ 00:11:26.604 { 00:11:26.604 "name": null, 00:11:26.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.604 "is_configured": false, 00:11:26.604 "data_offset": 0, 00:11:26.604 "data_size": 65536 00:11:26.604 }, 00:11:26.604 { 00:11:26.604 "name": "BaseBdev2", 00:11:26.604 "uuid": "04571f2e-1ead-4aa7-976e-a3211ec93572", 00:11:26.604 "is_configured": true, 00:11:26.605 "data_offset": 0, 00:11:26.605 "data_size": 65536 00:11:26.605 }, 00:11:26.605 { 00:11:26.605 "name": "BaseBdev3", 00:11:26.605 "uuid": "f01761ef-c315-4c36-832d-50cd32f1713d", 00:11:26.605 "is_configured": true, 00:11:26.605 "data_offset": 0, 00:11:26.605 "data_size": 65536 00:11:26.605 } 00:11:26.605 ] 00:11:26.605 }' 00:11:26.605 06:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.605 06:21:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.934 06:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:26.934 06:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:26.934 06:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.934 06:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.934 06:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.934 06:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:26.934 06:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.934 06:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:26.934 06:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:26.934 06:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:26.934 06:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.934 06:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.934 [2024-11-26 06:21:10.964950] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:27.194 06:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.194 06:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:27.194 06:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:27.194 06:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.194 06:21:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:27.194 06:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.194 06:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.194 06:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.194 06:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:27.194 06:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:27.194 06:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:27.194 06:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.194 06:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.194 [2024-11-26 06:21:11.133933] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:27.194 [2024-11-26 06:21:11.134072] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:27.194 06:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.194 06:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:27.194 06:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:27.194 06:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.194 06:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:27.194 06:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.194 06:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:11:27.194 06:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.194 06:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:27.194 06:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:27.194 06:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:11:27.194 06:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:27.194 06:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:27.194 06:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:27.194 06:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.194 06:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.455 BaseBdev2 00:11:27.455 06:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.455 06:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:27.455 06:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:27.455 06:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:27.455 06:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:27.455 06:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:27.455 06:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:27.455 06:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:27.455 06:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:27.455 06:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.455 06:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.455 06:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:27.455 06:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.455 06:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.455 [ 00:11:27.455 { 00:11:27.455 "name": "BaseBdev2", 00:11:27.455 "aliases": [ 00:11:27.455 "2a6d210e-20fc-440f-8f63-e27634bd7ac7" 00:11:27.455 ], 00:11:27.455 "product_name": "Malloc disk", 00:11:27.455 "block_size": 512, 00:11:27.455 "num_blocks": 65536, 00:11:27.455 "uuid": "2a6d210e-20fc-440f-8f63-e27634bd7ac7", 00:11:27.455 "assigned_rate_limits": { 00:11:27.455 "rw_ios_per_sec": 0, 00:11:27.455 "rw_mbytes_per_sec": 0, 00:11:27.455 "r_mbytes_per_sec": 0, 00:11:27.455 "w_mbytes_per_sec": 0 00:11:27.455 }, 00:11:27.455 "claimed": false, 00:11:27.455 "zoned": false, 00:11:27.455 "supported_io_types": { 00:11:27.455 "read": true, 00:11:27.455 "write": true, 00:11:27.455 "unmap": true, 00:11:27.455 "flush": true, 00:11:27.455 "reset": true, 00:11:27.455 "nvme_admin": false, 00:11:27.455 "nvme_io": false, 00:11:27.455 "nvme_io_md": false, 00:11:27.455 "write_zeroes": true, 00:11:27.455 "zcopy": true, 00:11:27.455 "get_zone_info": false, 00:11:27.455 "zone_management": false, 00:11:27.455 "zone_append": false, 00:11:27.455 "compare": false, 00:11:27.455 "compare_and_write": false, 00:11:27.455 "abort": true, 00:11:27.455 "seek_hole": false, 00:11:27.455 "seek_data": false, 00:11:27.455 "copy": true, 00:11:27.455 "nvme_iov_md": false 00:11:27.455 }, 00:11:27.455 "memory_domains": [ 00:11:27.455 { 00:11:27.455 "dma_device_id": "system", 00:11:27.455 "dma_device_type": 1 00:11:27.455 }, 
00:11:27.455 { 00:11:27.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.455 "dma_device_type": 2 00:11:27.455 } 00:11:27.455 ], 00:11:27.455 "driver_specific": {} 00:11:27.455 } 00:11:27.455 ] 00:11:27.455 06:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.455 06:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:27.455 06:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:27.455 06:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:27.455 06:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:27.455 06:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.455 06:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.455 BaseBdev3 00:11:27.455 06:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.455 06:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:27.455 06:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:27.455 06:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:27.455 06:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:27.455 06:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:27.455 06:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:27.455 06:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:27.455 06:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:27.455 06:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.455 06:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.455 06:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:27.455 06:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.455 06:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.455 [ 00:11:27.455 { 00:11:27.455 "name": "BaseBdev3", 00:11:27.455 "aliases": [ 00:11:27.455 "fd5323eb-3ed1-4a76-8659-845f751365bb" 00:11:27.455 ], 00:11:27.455 "product_name": "Malloc disk", 00:11:27.455 "block_size": 512, 00:11:27.455 "num_blocks": 65536, 00:11:27.455 "uuid": "fd5323eb-3ed1-4a76-8659-845f751365bb", 00:11:27.455 "assigned_rate_limits": { 00:11:27.455 "rw_ios_per_sec": 0, 00:11:27.455 "rw_mbytes_per_sec": 0, 00:11:27.455 "r_mbytes_per_sec": 0, 00:11:27.455 "w_mbytes_per_sec": 0 00:11:27.455 }, 00:11:27.455 "claimed": false, 00:11:27.455 "zoned": false, 00:11:27.455 "supported_io_types": { 00:11:27.455 "read": true, 00:11:27.455 "write": true, 00:11:27.455 "unmap": true, 00:11:27.455 "flush": true, 00:11:27.455 "reset": true, 00:11:27.455 "nvme_admin": false, 00:11:27.455 "nvme_io": false, 00:11:27.455 "nvme_io_md": false, 00:11:27.455 "write_zeroes": true, 00:11:27.455 "zcopy": true, 00:11:27.455 "get_zone_info": false, 00:11:27.455 "zone_management": false, 00:11:27.455 "zone_append": false, 00:11:27.455 "compare": false, 00:11:27.455 "compare_and_write": false, 00:11:27.455 "abort": true, 00:11:27.455 "seek_hole": false, 00:11:27.455 "seek_data": false, 00:11:27.456 "copy": true, 00:11:27.456 "nvme_iov_md": false 00:11:27.456 }, 00:11:27.456 "memory_domains": [ 00:11:27.456 { 00:11:27.456 "dma_device_id": "system", 00:11:27.456 "dma_device_type": 1 00:11:27.456 }, 00:11:27.456 { 
00:11:27.456 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.456 "dma_device_type": 2 00:11:27.456 } 00:11:27.456 ], 00:11:27.456 "driver_specific": {} 00:11:27.456 } 00:11:27.456 ] 00:11:27.456 06:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.456 06:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:27.456 06:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:27.456 06:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:27.456 06:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:27.456 06:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.456 06:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.456 [2024-11-26 06:21:11.486570] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:27.456 [2024-11-26 06:21:11.486670] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:27.456 [2024-11-26 06:21:11.486743] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:27.456 [2024-11-26 06:21:11.489144] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:27.456 06:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.456 06:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:27.456 06:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:27.456 06:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:11:27.456 06:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:27.456 06:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:27.456 06:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:27.456 06:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.456 06:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.456 06:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.456 06:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.456 06:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.456 06:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.456 06:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.456 06:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.456 06:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.456 06:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.456 "name": "Existed_Raid", 00:11:27.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.456 "strip_size_kb": 64, 00:11:27.456 "state": "configuring", 00:11:27.456 "raid_level": "raid0", 00:11:27.456 "superblock": false, 00:11:27.456 "num_base_bdevs": 3, 00:11:27.456 "num_base_bdevs_discovered": 2, 00:11:27.456 "num_base_bdevs_operational": 3, 00:11:27.456 "base_bdevs_list": [ 00:11:27.456 { 00:11:27.456 "name": "BaseBdev1", 00:11:27.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.456 
"is_configured": false, 00:11:27.456 "data_offset": 0, 00:11:27.456 "data_size": 0 00:11:27.456 }, 00:11:27.456 { 00:11:27.456 "name": "BaseBdev2", 00:11:27.456 "uuid": "2a6d210e-20fc-440f-8f63-e27634bd7ac7", 00:11:27.456 "is_configured": true, 00:11:27.456 "data_offset": 0, 00:11:27.456 "data_size": 65536 00:11:27.456 }, 00:11:27.456 { 00:11:27.456 "name": "BaseBdev3", 00:11:27.456 "uuid": "fd5323eb-3ed1-4a76-8659-845f751365bb", 00:11:27.456 "is_configured": true, 00:11:27.456 "data_offset": 0, 00:11:27.456 "data_size": 65536 00:11:27.456 } 00:11:27.456 ] 00:11:27.456 }' 00:11:27.456 06:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.456 06:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.025 06:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:28.025 06:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.025 06:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.025 [2024-11-26 06:21:11.877943] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:28.025 06:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.025 06:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:28.025 06:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:28.025 06:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:28.025 06:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:28.025 06:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:28.025 06:21:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:28.025 06:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.025 06:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.025 06:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.025 06:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.025 06:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.025 06:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.025 06:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.025 06:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.025 06:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.025 06:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.025 "name": "Existed_Raid", 00:11:28.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.025 "strip_size_kb": 64, 00:11:28.025 "state": "configuring", 00:11:28.025 "raid_level": "raid0", 00:11:28.025 "superblock": false, 00:11:28.025 "num_base_bdevs": 3, 00:11:28.025 "num_base_bdevs_discovered": 1, 00:11:28.025 "num_base_bdevs_operational": 3, 00:11:28.025 "base_bdevs_list": [ 00:11:28.025 { 00:11:28.025 "name": "BaseBdev1", 00:11:28.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.025 "is_configured": false, 00:11:28.025 "data_offset": 0, 00:11:28.025 "data_size": 0 00:11:28.025 }, 00:11:28.025 { 00:11:28.026 "name": null, 00:11:28.026 "uuid": "2a6d210e-20fc-440f-8f63-e27634bd7ac7", 00:11:28.026 "is_configured": false, 00:11:28.026 "data_offset": 0, 
00:11:28.026 "data_size": 65536 00:11:28.026 }, 00:11:28.026 { 00:11:28.026 "name": "BaseBdev3", 00:11:28.026 "uuid": "fd5323eb-3ed1-4a76-8659-845f751365bb", 00:11:28.026 "is_configured": true, 00:11:28.026 "data_offset": 0, 00:11:28.026 "data_size": 65536 00:11:28.026 } 00:11:28.026 ] 00:11:28.026 }' 00:11:28.026 06:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.026 06:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.285 06:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.285 06:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:28.285 06:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.285 06:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.285 06:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.285 06:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:28.285 06:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:28.285 06:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.285 06:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.285 [2024-11-26 06:21:12.400075] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:28.285 BaseBdev1 00:11:28.285 06:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.285 06:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:28.285 06:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev1 00:11:28.285 06:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:28.285 06:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:28.285 06:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:28.285 06:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:28.285 06:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:28.285 06:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.285 06:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.285 06:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.285 06:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:28.285 06:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.285 06:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.544 [ 00:11:28.544 { 00:11:28.544 "name": "BaseBdev1", 00:11:28.544 "aliases": [ 00:11:28.544 "9ab28498-b712-4082-9c78-c60b8d3099a8" 00:11:28.544 ], 00:11:28.544 "product_name": "Malloc disk", 00:11:28.544 "block_size": 512, 00:11:28.544 "num_blocks": 65536, 00:11:28.544 "uuid": "9ab28498-b712-4082-9c78-c60b8d3099a8", 00:11:28.544 "assigned_rate_limits": { 00:11:28.544 "rw_ios_per_sec": 0, 00:11:28.544 "rw_mbytes_per_sec": 0, 00:11:28.544 "r_mbytes_per_sec": 0, 00:11:28.544 "w_mbytes_per_sec": 0 00:11:28.544 }, 00:11:28.544 "claimed": true, 00:11:28.544 "claim_type": "exclusive_write", 00:11:28.544 "zoned": false, 00:11:28.544 "supported_io_types": { 00:11:28.544 "read": true, 00:11:28.544 "write": true, 00:11:28.544 "unmap": 
true, 00:11:28.544 "flush": true, 00:11:28.544 "reset": true, 00:11:28.544 "nvme_admin": false, 00:11:28.544 "nvme_io": false, 00:11:28.544 "nvme_io_md": false, 00:11:28.544 "write_zeroes": true, 00:11:28.544 "zcopy": true, 00:11:28.544 "get_zone_info": false, 00:11:28.544 "zone_management": false, 00:11:28.544 "zone_append": false, 00:11:28.544 "compare": false, 00:11:28.544 "compare_and_write": false, 00:11:28.544 "abort": true, 00:11:28.544 "seek_hole": false, 00:11:28.544 "seek_data": false, 00:11:28.544 "copy": true, 00:11:28.544 "nvme_iov_md": false 00:11:28.544 }, 00:11:28.544 "memory_domains": [ 00:11:28.544 { 00:11:28.544 "dma_device_id": "system", 00:11:28.544 "dma_device_type": 1 00:11:28.544 }, 00:11:28.544 { 00:11:28.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.544 "dma_device_type": 2 00:11:28.544 } 00:11:28.544 ], 00:11:28.544 "driver_specific": {} 00:11:28.544 } 00:11:28.544 ] 00:11:28.544 06:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.544 06:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:28.544 06:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:28.544 06:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:28.544 06:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:28.544 06:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:28.544 06:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:28.545 06:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:28.545 06:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.545 06:21:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.545 06:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.545 06:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.545 06:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.545 06:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.545 06:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.545 06:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.545 06:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.545 06:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.545 "name": "Existed_Raid", 00:11:28.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.545 "strip_size_kb": 64, 00:11:28.545 "state": "configuring", 00:11:28.545 "raid_level": "raid0", 00:11:28.545 "superblock": false, 00:11:28.545 "num_base_bdevs": 3, 00:11:28.545 "num_base_bdevs_discovered": 2, 00:11:28.545 "num_base_bdevs_operational": 3, 00:11:28.545 "base_bdevs_list": [ 00:11:28.545 { 00:11:28.545 "name": "BaseBdev1", 00:11:28.545 "uuid": "9ab28498-b712-4082-9c78-c60b8d3099a8", 00:11:28.545 "is_configured": true, 00:11:28.545 "data_offset": 0, 00:11:28.545 "data_size": 65536 00:11:28.545 }, 00:11:28.545 { 00:11:28.545 "name": null, 00:11:28.545 "uuid": "2a6d210e-20fc-440f-8f63-e27634bd7ac7", 00:11:28.545 "is_configured": false, 00:11:28.545 "data_offset": 0, 00:11:28.545 "data_size": 65536 00:11:28.545 }, 00:11:28.545 { 00:11:28.545 "name": "BaseBdev3", 00:11:28.545 "uuid": "fd5323eb-3ed1-4a76-8659-845f751365bb", 00:11:28.545 "is_configured": true, 00:11:28.545 "data_offset": 0, 
00:11:28.545 "data_size": 65536 00:11:28.545 } 00:11:28.545 ] 00:11:28.545 }' 00:11:28.545 06:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.545 06:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.805 06:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.805 06:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.805 06:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.805 06:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:28.805 06:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.805 06:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:28.805 06:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:28.805 06:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.805 06:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.064 [2024-11-26 06:21:12.939248] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:29.064 06:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.064 06:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:29.064 06:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:29.064 06:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:29.064 06:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:11:29.064 06:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:29.064 06:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:29.064 06:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.064 06:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.064 06:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.064 06:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.064 06:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.064 06:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.064 06:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.064 06:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.064 06:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.064 06:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.064 "name": "Existed_Raid", 00:11:29.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.064 "strip_size_kb": 64, 00:11:29.064 "state": "configuring", 00:11:29.064 "raid_level": "raid0", 00:11:29.064 "superblock": false, 00:11:29.064 "num_base_bdevs": 3, 00:11:29.064 "num_base_bdevs_discovered": 1, 00:11:29.064 "num_base_bdevs_operational": 3, 00:11:29.064 "base_bdevs_list": [ 00:11:29.064 { 00:11:29.064 "name": "BaseBdev1", 00:11:29.064 "uuid": "9ab28498-b712-4082-9c78-c60b8d3099a8", 00:11:29.064 "is_configured": true, 00:11:29.064 "data_offset": 0, 00:11:29.064 "data_size": 65536 00:11:29.064 }, 00:11:29.064 { 
00:11:29.064 "name": null, 00:11:29.064 "uuid": "2a6d210e-20fc-440f-8f63-e27634bd7ac7", 00:11:29.064 "is_configured": false, 00:11:29.064 "data_offset": 0, 00:11:29.064 "data_size": 65536 00:11:29.064 }, 00:11:29.064 { 00:11:29.064 "name": null, 00:11:29.064 "uuid": "fd5323eb-3ed1-4a76-8659-845f751365bb", 00:11:29.064 "is_configured": false, 00:11:29.064 "data_offset": 0, 00:11:29.064 "data_size": 65536 00:11:29.064 } 00:11:29.064 ] 00:11:29.064 }' 00:11:29.064 06:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.064 06:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.325 06:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.325 06:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.325 06:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:29.326 06:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.326 06:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.586 06:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:29.586 06:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:29.586 06:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.586 06:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.586 [2024-11-26 06:21:13.486380] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:29.586 06:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.586 06:21:13 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:29.586 06:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:29.586 06:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:29.586 06:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:29.586 06:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:29.586 06:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:29.586 06:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.586 06:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.586 06:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.586 06:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.586 06:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.586 06:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.586 06:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.586 06:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.586 06:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.586 06:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.586 "name": "Existed_Raid", 00:11:29.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.586 "strip_size_kb": 64, 00:11:29.586 "state": "configuring", 00:11:29.586 "raid_level": "raid0", 00:11:29.586 
"superblock": false, 00:11:29.586 "num_base_bdevs": 3, 00:11:29.586 "num_base_bdevs_discovered": 2, 00:11:29.586 "num_base_bdevs_operational": 3, 00:11:29.586 "base_bdevs_list": [ 00:11:29.586 { 00:11:29.586 "name": "BaseBdev1", 00:11:29.586 "uuid": "9ab28498-b712-4082-9c78-c60b8d3099a8", 00:11:29.586 "is_configured": true, 00:11:29.586 "data_offset": 0, 00:11:29.586 "data_size": 65536 00:11:29.586 }, 00:11:29.586 { 00:11:29.586 "name": null, 00:11:29.586 "uuid": "2a6d210e-20fc-440f-8f63-e27634bd7ac7", 00:11:29.586 "is_configured": false, 00:11:29.586 "data_offset": 0, 00:11:29.586 "data_size": 65536 00:11:29.586 }, 00:11:29.586 { 00:11:29.586 "name": "BaseBdev3", 00:11:29.586 "uuid": "fd5323eb-3ed1-4a76-8659-845f751365bb", 00:11:29.586 "is_configured": true, 00:11:29.586 "data_offset": 0, 00:11:29.586 "data_size": 65536 00:11:29.586 } 00:11:29.586 ] 00:11:29.586 }' 00:11:29.586 06:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.586 06:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.846 06:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.846 06:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.846 06:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.846 06:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:29.846 06:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.105 06:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:30.106 06:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:30.106 06:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:30.106 06:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.106 [2024-11-26 06:21:14.005508] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:30.106 06:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.106 06:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:30.106 06:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:30.106 06:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:30.106 06:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:30.106 06:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:30.106 06:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:30.106 06:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.106 06:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.106 06:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.106 06:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.106 06:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:30.106 06:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.106 06:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.106 06:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.106 06:21:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.106 06:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.106 "name": "Existed_Raid", 00:11:30.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.106 "strip_size_kb": 64, 00:11:30.106 "state": "configuring", 00:11:30.106 "raid_level": "raid0", 00:11:30.106 "superblock": false, 00:11:30.106 "num_base_bdevs": 3, 00:11:30.106 "num_base_bdevs_discovered": 1, 00:11:30.106 "num_base_bdevs_operational": 3, 00:11:30.106 "base_bdevs_list": [ 00:11:30.106 { 00:11:30.106 "name": null, 00:11:30.106 "uuid": "9ab28498-b712-4082-9c78-c60b8d3099a8", 00:11:30.106 "is_configured": false, 00:11:30.106 "data_offset": 0, 00:11:30.106 "data_size": 65536 00:11:30.106 }, 00:11:30.106 { 00:11:30.106 "name": null, 00:11:30.106 "uuid": "2a6d210e-20fc-440f-8f63-e27634bd7ac7", 00:11:30.106 "is_configured": false, 00:11:30.106 "data_offset": 0, 00:11:30.106 "data_size": 65536 00:11:30.106 }, 00:11:30.106 { 00:11:30.106 "name": "BaseBdev3", 00:11:30.106 "uuid": "fd5323eb-3ed1-4a76-8659-845f751365bb", 00:11:30.106 "is_configured": true, 00:11:30.106 "data_offset": 0, 00:11:30.106 "data_size": 65536 00:11:30.106 } 00:11:30.106 ] 00:11:30.106 }' 00:11:30.106 06:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.106 06:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.680 06:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.680 06:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.680 06:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.680 06:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:30.680 06:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:11:30.680 06:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:30.680 06:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:30.680 06:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.680 06:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.680 [2024-11-26 06:21:14.606399] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:30.680 06:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.680 06:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:30.680 06:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:30.680 06:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:30.680 06:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:30.680 06:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:30.680 06:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:30.680 06:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.680 06:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.680 06:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.680 06:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.680 06:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:11:30.680 06:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.680 06:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.680 06:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.680 06:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.680 06:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.680 "name": "Existed_Raid", 00:11:30.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.680 "strip_size_kb": 64, 00:11:30.680 "state": "configuring", 00:11:30.680 "raid_level": "raid0", 00:11:30.680 "superblock": false, 00:11:30.680 "num_base_bdevs": 3, 00:11:30.680 "num_base_bdevs_discovered": 2, 00:11:30.680 "num_base_bdevs_operational": 3, 00:11:30.680 "base_bdevs_list": [ 00:11:30.680 { 00:11:30.680 "name": null, 00:11:30.680 "uuid": "9ab28498-b712-4082-9c78-c60b8d3099a8", 00:11:30.680 "is_configured": false, 00:11:30.680 "data_offset": 0, 00:11:30.680 "data_size": 65536 00:11:30.680 }, 00:11:30.680 { 00:11:30.680 "name": "BaseBdev2", 00:11:30.680 "uuid": "2a6d210e-20fc-440f-8f63-e27634bd7ac7", 00:11:30.680 "is_configured": true, 00:11:30.680 "data_offset": 0, 00:11:30.680 "data_size": 65536 00:11:30.680 }, 00:11:30.680 { 00:11:30.680 "name": "BaseBdev3", 00:11:30.680 "uuid": "fd5323eb-3ed1-4a76-8659-845f751365bb", 00:11:30.680 "is_configured": true, 00:11:30.680 "data_offset": 0, 00:11:30.680 "data_size": 65536 00:11:30.680 } 00:11:30.680 ] 00:11:30.680 }' 00:11:30.680 06:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.680 06:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.940 06:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.940 
06:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:30.940 06:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.940 06:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.940 06:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.940 06:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:30.940 06:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.940 06:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.940 06:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.940 06:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:30.940 06:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.199 06:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9ab28498-b712-4082-9c78-c60b8d3099a8 00:11:31.199 06:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.199 06:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.199 [2024-11-26 06:21:15.137189] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:31.199 [2024-11-26 06:21:15.137379] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:31.199 [2024-11-26 06:21:15.137397] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:11:31.199 [2024-11-26 06:21:15.137755] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:11:31.199 [2024-11-26 06:21:15.137948] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:31.199 [2024-11-26 06:21:15.137960] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:31.199 [2024-11-26 06:21:15.138285] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:31.199 NewBaseBdev 00:11:31.199 06:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.199 06:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:31.199 06:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:31.199 06:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:31.199 06:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:31.199 06:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:31.199 06:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:31.199 06:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:31.199 06:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.199 06:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.199 06:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.199 06:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:31.199 06:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.199 06:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:11:31.199 [ 00:11:31.199 { 00:11:31.199 "name": "NewBaseBdev", 00:11:31.199 "aliases": [ 00:11:31.199 "9ab28498-b712-4082-9c78-c60b8d3099a8" 00:11:31.199 ], 00:11:31.199 "product_name": "Malloc disk", 00:11:31.199 "block_size": 512, 00:11:31.199 "num_blocks": 65536, 00:11:31.199 "uuid": "9ab28498-b712-4082-9c78-c60b8d3099a8", 00:11:31.199 "assigned_rate_limits": { 00:11:31.199 "rw_ios_per_sec": 0, 00:11:31.199 "rw_mbytes_per_sec": 0, 00:11:31.199 "r_mbytes_per_sec": 0, 00:11:31.199 "w_mbytes_per_sec": 0 00:11:31.199 }, 00:11:31.199 "claimed": true, 00:11:31.199 "claim_type": "exclusive_write", 00:11:31.199 "zoned": false, 00:11:31.199 "supported_io_types": { 00:11:31.199 "read": true, 00:11:31.199 "write": true, 00:11:31.199 "unmap": true, 00:11:31.199 "flush": true, 00:11:31.199 "reset": true, 00:11:31.199 "nvme_admin": false, 00:11:31.199 "nvme_io": false, 00:11:31.199 "nvme_io_md": false, 00:11:31.199 "write_zeroes": true, 00:11:31.199 "zcopy": true, 00:11:31.199 "get_zone_info": false, 00:11:31.199 "zone_management": false, 00:11:31.199 "zone_append": false, 00:11:31.199 "compare": false, 00:11:31.199 "compare_and_write": false, 00:11:31.199 "abort": true, 00:11:31.199 "seek_hole": false, 00:11:31.199 "seek_data": false, 00:11:31.199 "copy": true, 00:11:31.199 "nvme_iov_md": false 00:11:31.199 }, 00:11:31.199 "memory_domains": [ 00:11:31.199 { 00:11:31.199 "dma_device_id": "system", 00:11:31.199 "dma_device_type": 1 00:11:31.199 }, 00:11:31.199 { 00:11:31.199 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.199 "dma_device_type": 2 00:11:31.199 } 00:11:31.199 ], 00:11:31.199 "driver_specific": {} 00:11:31.199 } 00:11:31.199 ] 00:11:31.199 06:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.200 06:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:31.200 06:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:11:31.200 06:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:31.200 06:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:31.200 06:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:31.200 06:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:31.200 06:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:31.200 06:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.200 06:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.200 06:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.200 06:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.200 06:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.200 06:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.200 06:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.200 06:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.200 06:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.200 06:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.200 "name": "Existed_Raid", 00:11:31.200 "uuid": "2bdec0c0-2ee5-42da-b16e-47c96a655467", 00:11:31.200 "strip_size_kb": 64, 00:11:31.200 "state": "online", 00:11:31.200 "raid_level": "raid0", 00:11:31.200 "superblock": false, 00:11:31.200 "num_base_bdevs": 3, 00:11:31.200 
"num_base_bdevs_discovered": 3, 00:11:31.200 "num_base_bdevs_operational": 3, 00:11:31.200 "base_bdevs_list": [ 00:11:31.200 { 00:11:31.200 "name": "NewBaseBdev", 00:11:31.200 "uuid": "9ab28498-b712-4082-9c78-c60b8d3099a8", 00:11:31.200 "is_configured": true, 00:11:31.200 "data_offset": 0, 00:11:31.200 "data_size": 65536 00:11:31.200 }, 00:11:31.200 { 00:11:31.200 "name": "BaseBdev2", 00:11:31.200 "uuid": "2a6d210e-20fc-440f-8f63-e27634bd7ac7", 00:11:31.200 "is_configured": true, 00:11:31.200 "data_offset": 0, 00:11:31.200 "data_size": 65536 00:11:31.200 }, 00:11:31.200 { 00:11:31.200 "name": "BaseBdev3", 00:11:31.200 "uuid": "fd5323eb-3ed1-4a76-8659-845f751365bb", 00:11:31.200 "is_configured": true, 00:11:31.200 "data_offset": 0, 00:11:31.200 "data_size": 65536 00:11:31.200 } 00:11:31.200 ] 00:11:31.200 }' 00:11:31.200 06:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.200 06:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.458 06:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:31.458 06:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:31.458 06:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:31.458 06:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:31.458 06:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:31.459 06:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:31.459 06:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:31.717 06:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:31.717 06:21:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.718 06:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.718 [2024-11-26 06:21:15.596886] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:31.718 06:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.718 06:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:31.718 "name": "Existed_Raid", 00:11:31.718 "aliases": [ 00:11:31.718 "2bdec0c0-2ee5-42da-b16e-47c96a655467" 00:11:31.718 ], 00:11:31.718 "product_name": "Raid Volume", 00:11:31.718 "block_size": 512, 00:11:31.718 "num_blocks": 196608, 00:11:31.718 "uuid": "2bdec0c0-2ee5-42da-b16e-47c96a655467", 00:11:31.718 "assigned_rate_limits": { 00:11:31.718 "rw_ios_per_sec": 0, 00:11:31.718 "rw_mbytes_per_sec": 0, 00:11:31.718 "r_mbytes_per_sec": 0, 00:11:31.718 "w_mbytes_per_sec": 0 00:11:31.718 }, 00:11:31.718 "claimed": false, 00:11:31.718 "zoned": false, 00:11:31.718 "supported_io_types": { 00:11:31.718 "read": true, 00:11:31.718 "write": true, 00:11:31.718 "unmap": true, 00:11:31.718 "flush": true, 00:11:31.718 "reset": true, 00:11:31.718 "nvme_admin": false, 00:11:31.718 "nvme_io": false, 00:11:31.718 "nvme_io_md": false, 00:11:31.718 "write_zeroes": true, 00:11:31.718 "zcopy": false, 00:11:31.718 "get_zone_info": false, 00:11:31.718 "zone_management": false, 00:11:31.718 "zone_append": false, 00:11:31.718 "compare": false, 00:11:31.718 "compare_and_write": false, 00:11:31.718 "abort": false, 00:11:31.718 "seek_hole": false, 00:11:31.718 "seek_data": false, 00:11:31.718 "copy": false, 00:11:31.718 "nvme_iov_md": false 00:11:31.718 }, 00:11:31.718 "memory_domains": [ 00:11:31.718 { 00:11:31.718 "dma_device_id": "system", 00:11:31.718 "dma_device_type": 1 00:11:31.718 }, 00:11:31.718 { 00:11:31.718 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.718 "dma_device_type": 2 00:11:31.718 }, 
00:11:31.718 { 00:11:31.718 "dma_device_id": "system", 00:11:31.718 "dma_device_type": 1 00:11:31.718 }, 00:11:31.718 { 00:11:31.718 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.718 "dma_device_type": 2 00:11:31.718 }, 00:11:31.718 { 00:11:31.718 "dma_device_id": "system", 00:11:31.718 "dma_device_type": 1 00:11:31.718 }, 00:11:31.718 { 00:11:31.718 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.718 "dma_device_type": 2 00:11:31.718 } 00:11:31.718 ], 00:11:31.718 "driver_specific": { 00:11:31.718 "raid": { 00:11:31.718 "uuid": "2bdec0c0-2ee5-42da-b16e-47c96a655467", 00:11:31.718 "strip_size_kb": 64, 00:11:31.718 "state": "online", 00:11:31.718 "raid_level": "raid0", 00:11:31.718 "superblock": false, 00:11:31.718 "num_base_bdevs": 3, 00:11:31.718 "num_base_bdevs_discovered": 3, 00:11:31.718 "num_base_bdevs_operational": 3, 00:11:31.718 "base_bdevs_list": [ 00:11:31.718 { 00:11:31.718 "name": "NewBaseBdev", 00:11:31.718 "uuid": "9ab28498-b712-4082-9c78-c60b8d3099a8", 00:11:31.718 "is_configured": true, 00:11:31.718 "data_offset": 0, 00:11:31.718 "data_size": 65536 00:11:31.718 }, 00:11:31.718 { 00:11:31.718 "name": "BaseBdev2", 00:11:31.718 "uuid": "2a6d210e-20fc-440f-8f63-e27634bd7ac7", 00:11:31.718 "is_configured": true, 00:11:31.718 "data_offset": 0, 00:11:31.718 "data_size": 65536 00:11:31.718 }, 00:11:31.718 { 00:11:31.718 "name": "BaseBdev3", 00:11:31.718 "uuid": "fd5323eb-3ed1-4a76-8659-845f751365bb", 00:11:31.718 "is_configured": true, 00:11:31.718 "data_offset": 0, 00:11:31.718 "data_size": 65536 00:11:31.718 } 00:11:31.718 ] 00:11:31.718 } 00:11:31.718 } 00:11:31.718 }' 00:11:31.718 06:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:31.718 06:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:31.718 BaseBdev2 00:11:31.718 BaseBdev3' 00:11:31.718 06:21:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:31.718 06:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:31.718 06:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:31.718 06:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:31.718 06:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:31.718 06:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.718 06:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.718 06:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.718 06:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:31.718 06:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:31.718 06:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:31.718 06:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:31.718 06:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:31.718 06:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.718 06:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.718 06:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.718 06:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:11:31.718 06:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:31.718 06:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:31.718 06:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:31.718 06:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:31.718 06:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.718 06:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.978 06:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.978 06:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:31.978 06:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:31.978 06:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:31.978 06:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.978 06:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.978 [2024-11-26 06:21:15.868092] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:31.978 [2024-11-26 06:21:15.868196] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:31.978 [2024-11-26 06:21:15.868386] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:31.978 [2024-11-26 06:21:15.868548] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:31.978 [2024-11-26 06:21:15.868627] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:31.978 06:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.978 06:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 64206 00:11:31.978 06:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 64206 ']' 00:11:31.978 06:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 64206 00:11:31.978 06:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:31.978 06:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:31.978 06:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64206 00:11:31.978 06:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:31.978 06:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:31.978 06:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64206' 00:11:31.978 killing process with pid 64206 00:11:31.978 06:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 64206 00:11:31.978 [2024-11-26 06:21:15.913586] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:31.978 06:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 64206 00:11:32.237 [2024-11-26 06:21:16.265362] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:33.613 ************************************ 00:11:33.613 END TEST raid_state_function_test 00:11:33.613 ************************************ 00:11:33.613 06:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:33.613 00:11:33.613 real 0m11.270s 
00:11:33.613 user 0m17.528s 00:11:33.613 sys 0m2.144s 00:11:33.613 06:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:33.613 06:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.613 06:21:17 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:11:33.613 06:21:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:33.613 06:21:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:33.613 06:21:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:33.613 ************************************ 00:11:33.613 START TEST raid_state_function_test_sb 00:11:33.613 ************************************ 00:11:33.613 06:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:11:33.613 06:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:11:33.613 06:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:11:33.613 06:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:33.613 06:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:33.613 06:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:33.613 06:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:33.613 06:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:33.613 06:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:33.613 06:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:33.613 06:21:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:33.613 06:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:33.613 06:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:33.613 06:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:33.613 06:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:33.613 06:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:33.613 06:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:33.613 06:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:33.613 06:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:33.613 06:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:33.613 06:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:33.613 06:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:33.613 06:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:11:33.613 06:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:33.613 06:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:33.613 06:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:33.613 06:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:33.613 06:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64838 00:11:33.613 06:21:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:33.613 06:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64838' 00:11:33.613 Process raid pid: 64838 00:11:33.613 06:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64838 00:11:33.613 06:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 64838 ']' 00:11:33.613 06:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:33.613 06:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:33.613 06:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:33.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:33.613 06:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:33.613 06:21:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.614 [2024-11-26 06:21:17.720767] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:11:33.614 [2024-11-26 06:21:17.721030] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:33.873 [2024-11-26 06:21:17.907193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:34.131 [2024-11-26 06:21:18.051534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.390 [2024-11-26 06:21:18.303509] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:34.390 [2024-11-26 06:21:18.303686] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:34.650 06:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:34.650 06:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:34.650 06:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:34.650 06:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.650 06:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.650 [2024-11-26 06:21:18.623403] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:34.650 [2024-11-26 06:21:18.623539] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:34.650 [2024-11-26 06:21:18.623574] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:34.650 [2024-11-26 06:21:18.623599] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:34.650 [2024-11-26 06:21:18.623619] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:11:34.650 [2024-11-26 06:21:18.623641] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:34.650 06:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.650 06:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:34.650 06:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:34.650 06:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:34.650 06:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:34.650 06:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:34.650 06:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:34.650 06:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.650 06:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.650 06:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.650 06:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.650 06:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.650 06:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.650 06:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.650 06:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.650 06:21:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.650 06:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.650 "name": "Existed_Raid", 00:11:34.650 "uuid": "13840658-f5d4-4f71-a2b6-7cad6b9134e7", 00:11:34.650 "strip_size_kb": 64, 00:11:34.650 "state": "configuring", 00:11:34.650 "raid_level": "raid0", 00:11:34.650 "superblock": true, 00:11:34.650 "num_base_bdevs": 3, 00:11:34.650 "num_base_bdevs_discovered": 0, 00:11:34.650 "num_base_bdevs_operational": 3, 00:11:34.650 "base_bdevs_list": [ 00:11:34.650 { 00:11:34.650 "name": "BaseBdev1", 00:11:34.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.650 "is_configured": false, 00:11:34.650 "data_offset": 0, 00:11:34.650 "data_size": 0 00:11:34.650 }, 00:11:34.650 { 00:11:34.650 "name": "BaseBdev2", 00:11:34.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.650 "is_configured": false, 00:11:34.650 "data_offset": 0, 00:11:34.650 "data_size": 0 00:11:34.650 }, 00:11:34.650 { 00:11:34.650 "name": "BaseBdev3", 00:11:34.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.650 "is_configured": false, 00:11:34.650 "data_offset": 0, 00:11:34.650 "data_size": 0 00:11:34.650 } 00:11:34.650 ] 00:11:34.650 }' 00:11:34.650 06:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.650 06:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.224 06:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:35.224 06:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.224 06:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.224 [2024-11-26 06:21:19.058634] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:35.224 [2024-11-26 06:21:19.058686] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:35.224 06:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.224 06:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:35.224 06:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.224 06:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.224 [2024-11-26 06:21:19.066626] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:35.224 [2024-11-26 06:21:19.066684] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:35.224 [2024-11-26 06:21:19.066696] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:35.225 [2024-11-26 06:21:19.066707] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:35.225 [2024-11-26 06:21:19.066715] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:35.225 [2024-11-26 06:21:19.066726] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:35.225 06:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.225 06:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:35.225 06:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.225 06:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.225 [2024-11-26 06:21:19.121982] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:35.225 BaseBdev1 
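The trace above repeatedly runs `verify_raid_bdev_state`, which pipes `rpc_cmd bdev_raid_get_bdevs all` through `jq -r '.[] | select(.name == "Existed_Raid")'` and checks the `state`, `raid_level`, `strip_size_kb`, and base-bdev counts against expectations. A minimal Python sketch of that check, using JSON shaped like the `Existed_Raid` dump in the log (the helper name and sample values here are illustrative, not part of the SPDK test scripts):

```python
import json

# Response shaped like the bdev_raid_get_bdevs output in the trace above
# (one raid bdev, superblock enabled, only BaseBdev1 configured so far).
response = json.loads("""
[
  {
    "name": "Existed_Raid",
    "state": "configuring",
    "raid_level": "raid0",
    "strip_size_kb": 64,
    "num_base_bdevs": 3,
    "num_base_bdevs_discovered": 1,
    "num_base_bdevs_operational": 3,
    "base_bdevs_list": [
      {"name": "BaseBdev1", "is_configured": true},
      {"name": "BaseBdev2", "is_configured": false},
      {"name": "BaseBdev3", "is_configured": false}
    ]
  }
]
""")

def verify_raid_bdev_state(bdevs, name, expected_state, raid_level,
                           strip_size_kb, num_operational):
    # Mirrors the jq filter: .[] | select(.name == "Existed_Raid")
    info = next(b for b in bdevs if b["name"] == name)
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size_kb
    assert info["num_base_bdevs_operational"] == num_operational
    # The discovered count must match the configured base bdevs.
    discovered = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    assert discovered == info["num_base_bdevs_discovered"]
    return info

info = verify_raid_bdev_state(response, "Existed_Raid", "configuring",
                              "raid0", 64, 3)
print(info["num_base_bdevs_discovered"])  # 1
```

This is why the trace alternates `bdev_malloc_create … -b BaseBdevN` with state checks: each newly claimed base bdev bumps `num_base_bdevs_discovered` while `state` stays `configuring` until all three are present.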
00:11:35.225 06:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.225 06:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:35.225 06:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:35.225 06:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:35.225 06:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:35.225 06:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:35.225 06:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:35.225 06:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:35.225 06:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.225 06:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.225 06:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.225 06:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:35.225 06:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.225 06:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.225 [ 00:11:35.225 { 00:11:35.225 "name": "BaseBdev1", 00:11:35.226 "aliases": [ 00:11:35.226 "e121f8b1-c22f-498a-a62f-cb49e54ac654" 00:11:35.226 ], 00:11:35.226 "product_name": "Malloc disk", 00:11:35.226 "block_size": 512, 00:11:35.226 "num_blocks": 65536, 00:11:35.226 "uuid": "e121f8b1-c22f-498a-a62f-cb49e54ac654", 00:11:35.226 "assigned_rate_limits": { 00:11:35.226 
"rw_ios_per_sec": 0, 00:11:35.226 "rw_mbytes_per_sec": 0, 00:11:35.226 "r_mbytes_per_sec": 0, 00:11:35.226 "w_mbytes_per_sec": 0 00:11:35.226 }, 00:11:35.226 "claimed": true, 00:11:35.226 "claim_type": "exclusive_write", 00:11:35.226 "zoned": false, 00:11:35.226 "supported_io_types": { 00:11:35.226 "read": true, 00:11:35.226 "write": true, 00:11:35.226 "unmap": true, 00:11:35.226 "flush": true, 00:11:35.226 "reset": true, 00:11:35.226 "nvme_admin": false, 00:11:35.226 "nvme_io": false, 00:11:35.226 "nvme_io_md": false, 00:11:35.226 "write_zeroes": true, 00:11:35.226 "zcopy": true, 00:11:35.226 "get_zone_info": false, 00:11:35.226 "zone_management": false, 00:11:35.226 "zone_append": false, 00:11:35.226 "compare": false, 00:11:35.226 "compare_and_write": false, 00:11:35.226 "abort": true, 00:11:35.226 "seek_hole": false, 00:11:35.226 "seek_data": false, 00:11:35.226 "copy": true, 00:11:35.226 "nvme_iov_md": false 00:11:35.226 }, 00:11:35.226 "memory_domains": [ 00:11:35.226 { 00:11:35.226 "dma_device_id": "system", 00:11:35.226 "dma_device_type": 1 00:11:35.226 }, 00:11:35.226 { 00:11:35.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.226 "dma_device_type": 2 00:11:35.226 } 00:11:35.226 ], 00:11:35.226 "driver_specific": {} 00:11:35.226 } 00:11:35.226 ] 00:11:35.227 06:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.227 06:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:35.227 06:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:35.227 06:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:35.227 06:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:35.227 06:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:11:35.227 06:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:35.227 06:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:35.227 06:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.227 06:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.227 06:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.227 06:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.227 06:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.227 06:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.227 06:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.227 06:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.227 06:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.227 06:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.227 "name": "Existed_Raid", 00:11:35.227 "uuid": "c12ecc67-a9b5-45ed-9b92-bd8e6b588b65", 00:11:35.227 "strip_size_kb": 64, 00:11:35.227 "state": "configuring", 00:11:35.227 "raid_level": "raid0", 00:11:35.227 "superblock": true, 00:11:35.227 "num_base_bdevs": 3, 00:11:35.227 "num_base_bdevs_discovered": 1, 00:11:35.227 "num_base_bdevs_operational": 3, 00:11:35.227 "base_bdevs_list": [ 00:11:35.227 { 00:11:35.227 "name": "BaseBdev1", 00:11:35.227 "uuid": "e121f8b1-c22f-498a-a62f-cb49e54ac654", 00:11:35.227 "is_configured": true, 00:11:35.227 "data_offset": 2048, 00:11:35.227 "data_size": 63488 
00:11:35.227 }, 00:11:35.227 { 00:11:35.227 "name": "BaseBdev2", 00:11:35.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.227 "is_configured": false, 00:11:35.227 "data_offset": 0, 00:11:35.227 "data_size": 0 00:11:35.227 }, 00:11:35.227 { 00:11:35.227 "name": "BaseBdev3", 00:11:35.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.227 "is_configured": false, 00:11:35.227 "data_offset": 0, 00:11:35.228 "data_size": 0 00:11:35.228 } 00:11:35.228 ] 00:11:35.228 }' 00:11:35.228 06:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.228 06:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.491 06:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:35.491 06:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.491 06:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.491 [2024-11-26 06:21:19.617231] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:35.491 [2024-11-26 06:21:19.617357] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:35.491 06:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.750 06:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:35.750 06:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.750 06:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.750 [2024-11-26 06:21:19.629264] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:35.750 [2024-11-26 
06:21:19.631506] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:35.750 [2024-11-26 06:21:19.631569] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:35.750 [2024-11-26 06:21:19.631581] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:35.750 [2024-11-26 06:21:19.631591] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:35.750 06:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.750 06:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:35.750 06:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:35.750 06:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:35.750 06:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:35.750 06:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:35.750 06:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:35.750 06:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:35.750 06:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:35.750 06:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.750 06:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.750 06:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.750 06:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:11:35.750 06:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.750 06:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.750 06:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.750 06:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.750 06:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.750 06:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.750 "name": "Existed_Raid", 00:11:35.750 "uuid": "fdbe0256-7308-42e5-8c4e-ac4dc8ae136f", 00:11:35.750 "strip_size_kb": 64, 00:11:35.750 "state": "configuring", 00:11:35.750 "raid_level": "raid0", 00:11:35.750 "superblock": true, 00:11:35.750 "num_base_bdevs": 3, 00:11:35.750 "num_base_bdevs_discovered": 1, 00:11:35.750 "num_base_bdevs_operational": 3, 00:11:35.750 "base_bdevs_list": [ 00:11:35.750 { 00:11:35.750 "name": "BaseBdev1", 00:11:35.750 "uuid": "e121f8b1-c22f-498a-a62f-cb49e54ac654", 00:11:35.750 "is_configured": true, 00:11:35.750 "data_offset": 2048, 00:11:35.750 "data_size": 63488 00:11:35.750 }, 00:11:35.750 { 00:11:35.750 "name": "BaseBdev2", 00:11:35.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.750 "is_configured": false, 00:11:35.750 "data_offset": 0, 00:11:35.750 "data_size": 0 00:11:35.750 }, 00:11:35.750 { 00:11:35.750 "name": "BaseBdev3", 00:11:35.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.750 "is_configured": false, 00:11:35.750 "data_offset": 0, 00:11:35.750 "data_size": 0 00:11:35.750 } 00:11:35.750 ] 00:11:35.750 }' 00:11:35.750 06:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.750 06:21:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:36.009 06:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:36.009 06:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.009 06:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.009 [2024-11-26 06:21:20.131040] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:36.009 BaseBdev2 00:11:36.009 06:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.009 06:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:36.009 06:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:36.009 06:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:36.009 06:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:36.009 06:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:36.009 06:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:36.009 06:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:36.009 06:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.009 06:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.268 06:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.268 06:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:36.268 06:21:20 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.268 06:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.268 [ 00:11:36.268 { 00:11:36.268 "name": "BaseBdev2", 00:11:36.268 "aliases": [ 00:11:36.268 "b9eb6f5c-9679-462d-905f-26f1e360c38b" 00:11:36.268 ], 00:11:36.268 "product_name": "Malloc disk", 00:11:36.268 "block_size": 512, 00:11:36.268 "num_blocks": 65536, 00:11:36.268 "uuid": "b9eb6f5c-9679-462d-905f-26f1e360c38b", 00:11:36.268 "assigned_rate_limits": { 00:11:36.268 "rw_ios_per_sec": 0, 00:11:36.268 "rw_mbytes_per_sec": 0, 00:11:36.268 "r_mbytes_per_sec": 0, 00:11:36.268 "w_mbytes_per_sec": 0 00:11:36.268 }, 00:11:36.268 "claimed": true, 00:11:36.268 "claim_type": "exclusive_write", 00:11:36.268 "zoned": false, 00:11:36.268 "supported_io_types": { 00:11:36.268 "read": true, 00:11:36.268 "write": true, 00:11:36.268 "unmap": true, 00:11:36.268 "flush": true, 00:11:36.268 "reset": true, 00:11:36.268 "nvme_admin": false, 00:11:36.268 "nvme_io": false, 00:11:36.268 "nvme_io_md": false, 00:11:36.268 "write_zeroes": true, 00:11:36.268 "zcopy": true, 00:11:36.268 "get_zone_info": false, 00:11:36.268 "zone_management": false, 00:11:36.268 "zone_append": false, 00:11:36.268 "compare": false, 00:11:36.268 "compare_and_write": false, 00:11:36.268 "abort": true, 00:11:36.268 "seek_hole": false, 00:11:36.268 "seek_data": false, 00:11:36.268 "copy": true, 00:11:36.268 "nvme_iov_md": false 00:11:36.268 }, 00:11:36.268 "memory_domains": [ 00:11:36.268 { 00:11:36.268 "dma_device_id": "system", 00:11:36.268 "dma_device_type": 1 00:11:36.268 }, 00:11:36.268 { 00:11:36.268 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.268 "dma_device_type": 2 00:11:36.268 } 00:11:36.268 ], 00:11:36.268 "driver_specific": {} 00:11:36.268 } 00:11:36.268 ] 00:11:36.268 06:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.268 06:21:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:11:36.268 06:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:36.268 06:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:36.268 06:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:36.268 06:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:36.268 06:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:36.268 06:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:36.268 06:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:36.268 06:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:36.268 06:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.268 06:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.268 06:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.268 06:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.268 06:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.268 06:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.268 06:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.268 06:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:36.268 06:21:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.268 06:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.268 "name": "Existed_Raid", 00:11:36.268 "uuid": "fdbe0256-7308-42e5-8c4e-ac4dc8ae136f", 00:11:36.268 "strip_size_kb": 64, 00:11:36.268 "state": "configuring", 00:11:36.268 "raid_level": "raid0", 00:11:36.268 "superblock": true, 00:11:36.268 "num_base_bdevs": 3, 00:11:36.268 "num_base_bdevs_discovered": 2, 00:11:36.268 "num_base_bdevs_operational": 3, 00:11:36.268 "base_bdevs_list": [ 00:11:36.268 { 00:11:36.268 "name": "BaseBdev1", 00:11:36.268 "uuid": "e121f8b1-c22f-498a-a62f-cb49e54ac654", 00:11:36.268 "is_configured": true, 00:11:36.268 "data_offset": 2048, 00:11:36.268 "data_size": 63488 00:11:36.268 }, 00:11:36.268 { 00:11:36.268 "name": "BaseBdev2", 00:11:36.268 "uuid": "b9eb6f5c-9679-462d-905f-26f1e360c38b", 00:11:36.268 "is_configured": true, 00:11:36.268 "data_offset": 2048, 00:11:36.268 "data_size": 63488 00:11:36.268 }, 00:11:36.268 { 00:11:36.268 "name": "BaseBdev3", 00:11:36.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.268 "is_configured": false, 00:11:36.268 "data_offset": 0, 00:11:36.268 "data_size": 0 00:11:36.268 } 00:11:36.268 ] 00:11:36.268 }' 00:11:36.268 06:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.268 06:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.837 06:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:36.837 06:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.837 06:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.837 BaseBdev3 00:11:36.837 [2024-11-26 06:21:20.722295] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:36.837 [2024-11-26 
06:21:20.722610] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:36.837 [2024-11-26 06:21:20.722639] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:36.837 [2024-11-26 06:21:20.722991] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:36.837 [2024-11-26 06:21:20.723232] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:36.837 [2024-11-26 06:21:20.723247] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:36.837 [2024-11-26 06:21:20.723512] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:36.837 06:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.837 06:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:36.837 06:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:36.837 06:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:36.837 06:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:36.837 06:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:36.837 06:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:36.837 06:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:36.837 06:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.837 06:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.837 06:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:36.837 06:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:36.837 06:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.837 06:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.837 [ 00:11:36.837 { 00:11:36.837 "name": "BaseBdev3", 00:11:36.837 "aliases": [ 00:11:36.838 "2e696ff8-887b-47ba-bab9-d424480fd737" 00:11:36.838 ], 00:11:36.838 "product_name": "Malloc disk", 00:11:36.838 "block_size": 512, 00:11:36.838 "num_blocks": 65536, 00:11:36.838 "uuid": "2e696ff8-887b-47ba-bab9-d424480fd737", 00:11:36.838 "assigned_rate_limits": { 00:11:36.838 "rw_ios_per_sec": 0, 00:11:36.838 "rw_mbytes_per_sec": 0, 00:11:36.838 "r_mbytes_per_sec": 0, 00:11:36.838 "w_mbytes_per_sec": 0 00:11:36.838 }, 00:11:36.838 "claimed": true, 00:11:36.838 "claim_type": "exclusive_write", 00:11:36.838 "zoned": false, 00:11:36.838 "supported_io_types": { 00:11:36.838 "read": true, 00:11:36.838 "write": true, 00:11:36.838 "unmap": true, 00:11:36.838 "flush": true, 00:11:36.838 "reset": true, 00:11:36.838 "nvme_admin": false, 00:11:36.838 "nvme_io": false, 00:11:36.838 "nvme_io_md": false, 00:11:36.838 "write_zeroes": true, 00:11:36.838 "zcopy": true, 00:11:36.838 "get_zone_info": false, 00:11:36.838 "zone_management": false, 00:11:36.838 "zone_append": false, 00:11:36.838 "compare": false, 00:11:36.838 "compare_and_write": false, 00:11:36.838 "abort": true, 00:11:36.838 "seek_hole": false, 00:11:36.838 "seek_data": false, 00:11:36.838 "copy": true, 00:11:36.838 "nvme_iov_md": false 00:11:36.838 }, 00:11:36.838 "memory_domains": [ 00:11:36.838 { 00:11:36.838 "dma_device_id": "system", 00:11:36.838 "dma_device_type": 1 00:11:36.838 }, 00:11:36.838 { 00:11:36.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.838 "dma_device_type": 2 00:11:36.838 } 00:11:36.838 ], 00:11:36.838 "driver_specific": {} 
00:11:36.838 } 00:11:36.838 ] 00:11:36.838 06:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.838 06:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:36.838 06:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:36.838 06:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:36.838 06:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:11:36.838 06:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:36.838 06:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:36.838 06:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:36.838 06:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:36.838 06:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:36.838 06:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.838 06:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.838 06:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.838 06:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.838 06:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.838 06:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:36.838 06:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:36.838 06:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.838 06:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.838 06:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.838 "name": "Existed_Raid", 00:11:36.838 "uuid": "fdbe0256-7308-42e5-8c4e-ac4dc8ae136f", 00:11:36.838 "strip_size_kb": 64, 00:11:36.838 "state": "online", 00:11:36.838 "raid_level": "raid0", 00:11:36.838 "superblock": true, 00:11:36.838 "num_base_bdevs": 3, 00:11:36.838 "num_base_bdevs_discovered": 3, 00:11:36.838 "num_base_bdevs_operational": 3, 00:11:36.838 "base_bdevs_list": [ 00:11:36.838 { 00:11:36.838 "name": "BaseBdev1", 00:11:36.838 "uuid": "e121f8b1-c22f-498a-a62f-cb49e54ac654", 00:11:36.838 "is_configured": true, 00:11:36.838 "data_offset": 2048, 00:11:36.838 "data_size": 63488 00:11:36.838 }, 00:11:36.838 { 00:11:36.838 "name": "BaseBdev2", 00:11:36.838 "uuid": "b9eb6f5c-9679-462d-905f-26f1e360c38b", 00:11:36.838 "is_configured": true, 00:11:36.838 "data_offset": 2048, 00:11:36.838 "data_size": 63488 00:11:36.838 }, 00:11:36.838 { 00:11:36.838 "name": "BaseBdev3", 00:11:36.838 "uuid": "2e696ff8-887b-47ba-bab9-d424480fd737", 00:11:36.838 "is_configured": true, 00:11:36.838 "data_offset": 2048, 00:11:36.838 "data_size": 63488 00:11:36.838 } 00:11:36.838 ] 00:11:36.838 }' 00:11:36.838 06:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.838 06:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.096 06:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:37.096 06:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:37.096 06:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:11:37.096 06:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:37.096 06:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:37.096 06:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:37.096 06:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:37.096 06:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:37.096 06:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.096 06:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.096 [2024-11-26 06:21:21.225989] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:37.369 06:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.369 06:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:37.369 "name": "Existed_Raid", 00:11:37.369 "aliases": [ 00:11:37.369 "fdbe0256-7308-42e5-8c4e-ac4dc8ae136f" 00:11:37.369 ], 00:11:37.369 "product_name": "Raid Volume", 00:11:37.369 "block_size": 512, 00:11:37.369 "num_blocks": 190464, 00:11:37.369 "uuid": "fdbe0256-7308-42e5-8c4e-ac4dc8ae136f", 00:11:37.369 "assigned_rate_limits": { 00:11:37.369 "rw_ios_per_sec": 0, 00:11:37.369 "rw_mbytes_per_sec": 0, 00:11:37.369 "r_mbytes_per_sec": 0, 00:11:37.369 "w_mbytes_per_sec": 0 00:11:37.369 }, 00:11:37.369 "claimed": false, 00:11:37.369 "zoned": false, 00:11:37.369 "supported_io_types": { 00:11:37.369 "read": true, 00:11:37.369 "write": true, 00:11:37.369 "unmap": true, 00:11:37.369 "flush": true, 00:11:37.369 "reset": true, 00:11:37.369 "nvme_admin": false, 00:11:37.369 "nvme_io": false, 00:11:37.369 "nvme_io_md": false, 00:11:37.369 
"write_zeroes": true, 00:11:37.369 "zcopy": false, 00:11:37.369 "get_zone_info": false, 00:11:37.369 "zone_management": false, 00:11:37.369 "zone_append": false, 00:11:37.369 "compare": false, 00:11:37.369 "compare_and_write": false, 00:11:37.369 "abort": false, 00:11:37.369 "seek_hole": false, 00:11:37.369 "seek_data": false, 00:11:37.369 "copy": false, 00:11:37.369 "nvme_iov_md": false 00:11:37.369 }, 00:11:37.369 "memory_domains": [ 00:11:37.369 { 00:11:37.369 "dma_device_id": "system", 00:11:37.369 "dma_device_type": 1 00:11:37.369 }, 00:11:37.369 { 00:11:37.369 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.369 "dma_device_type": 2 00:11:37.369 }, 00:11:37.369 { 00:11:37.369 "dma_device_id": "system", 00:11:37.369 "dma_device_type": 1 00:11:37.369 }, 00:11:37.369 { 00:11:37.369 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.369 "dma_device_type": 2 00:11:37.369 }, 00:11:37.369 { 00:11:37.369 "dma_device_id": "system", 00:11:37.369 "dma_device_type": 1 00:11:37.369 }, 00:11:37.369 { 00:11:37.369 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.369 "dma_device_type": 2 00:11:37.369 } 00:11:37.369 ], 00:11:37.369 "driver_specific": { 00:11:37.369 "raid": { 00:11:37.369 "uuid": "fdbe0256-7308-42e5-8c4e-ac4dc8ae136f", 00:11:37.369 "strip_size_kb": 64, 00:11:37.369 "state": "online", 00:11:37.369 "raid_level": "raid0", 00:11:37.369 "superblock": true, 00:11:37.369 "num_base_bdevs": 3, 00:11:37.369 "num_base_bdevs_discovered": 3, 00:11:37.369 "num_base_bdevs_operational": 3, 00:11:37.369 "base_bdevs_list": [ 00:11:37.369 { 00:11:37.369 "name": "BaseBdev1", 00:11:37.369 "uuid": "e121f8b1-c22f-498a-a62f-cb49e54ac654", 00:11:37.369 "is_configured": true, 00:11:37.369 "data_offset": 2048, 00:11:37.369 "data_size": 63488 00:11:37.369 }, 00:11:37.369 { 00:11:37.369 "name": "BaseBdev2", 00:11:37.369 "uuid": "b9eb6f5c-9679-462d-905f-26f1e360c38b", 00:11:37.369 "is_configured": true, 00:11:37.369 "data_offset": 2048, 00:11:37.369 "data_size": 63488 00:11:37.369 }, 
00:11:37.369 { 00:11:37.369 "name": "BaseBdev3", 00:11:37.369 "uuid": "2e696ff8-887b-47ba-bab9-d424480fd737", 00:11:37.369 "is_configured": true, 00:11:37.369 "data_offset": 2048, 00:11:37.369 "data_size": 63488 00:11:37.369 } 00:11:37.369 ] 00:11:37.369 } 00:11:37.369 } 00:11:37.369 }' 00:11:37.369 06:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:37.369 06:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:37.369 BaseBdev2 00:11:37.369 BaseBdev3' 00:11:37.369 06:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:37.369 06:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:37.369 06:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:37.369 06:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:37.369 06:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.369 06:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.369 06:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:37.369 06:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.369 06:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:37.369 06:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:37.369 06:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:37.369 
06:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:37.369 06:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:37.369 06:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.369 06:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.369 06:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.369 06:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:37.369 06:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:37.370 06:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:37.370 06:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:37.370 06:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:37.370 06:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.370 06:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.370 06:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.370 06:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:37.370 06:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:37.370 06:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:37.370 06:21:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.370 06:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.370 [2024-11-26 06:21:21.461235] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:37.370 [2024-11-26 06:21:21.461269] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:37.370 [2024-11-26 06:21:21.461352] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:37.629 06:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.629 06:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:37.629 06:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:11:37.629 06:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:37.629 06:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:37.629 06:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:37.629 06:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:11:37.629 06:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:37.629 06:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:37.629 06:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:37.629 06:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:37.629 06:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:37.629 06:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:37.629 06:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.629 06:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.629 06:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.629 06:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.629 06:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:37.629 06:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.629 06:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.629 06:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.629 06:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.629 "name": "Existed_Raid", 00:11:37.629 "uuid": "fdbe0256-7308-42e5-8c4e-ac4dc8ae136f", 00:11:37.629 "strip_size_kb": 64, 00:11:37.629 "state": "offline", 00:11:37.629 "raid_level": "raid0", 00:11:37.629 "superblock": true, 00:11:37.629 "num_base_bdevs": 3, 00:11:37.629 "num_base_bdevs_discovered": 2, 00:11:37.629 "num_base_bdevs_operational": 2, 00:11:37.629 "base_bdevs_list": [ 00:11:37.629 { 00:11:37.629 "name": null, 00:11:37.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.629 "is_configured": false, 00:11:37.629 "data_offset": 0, 00:11:37.629 "data_size": 63488 00:11:37.629 }, 00:11:37.629 { 00:11:37.629 "name": "BaseBdev2", 00:11:37.629 "uuid": "b9eb6f5c-9679-462d-905f-26f1e360c38b", 00:11:37.629 "is_configured": true, 00:11:37.629 "data_offset": 2048, 00:11:37.629 "data_size": 63488 00:11:37.629 }, 00:11:37.629 { 00:11:37.629 "name": "BaseBdev3", 00:11:37.629 "uuid": "2e696ff8-887b-47ba-bab9-d424480fd737", 
00:11:37.629 "is_configured": true, 00:11:37.629 "data_offset": 2048, 00:11:37.629 "data_size": 63488 00:11:37.629 } 00:11:37.629 ] 00:11:37.629 }' 00:11:37.629 06:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.629 06:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.887 06:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:37.887 06:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:37.887 06:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:37.887 06:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.887 06:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.887 06:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.147 06:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.147 06:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:38.147 06:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:38.147 06:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:38.147 06:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.147 06:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.147 [2024-11-26 06:21:22.034211] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:38.147 06:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.147 06:21:22 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:38.147 06:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:38.147 06:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.147 06:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:38.147 06:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.147 06:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.147 06:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.147 06:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:38.147 06:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:38.147 06:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:38.147 06:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.147 06:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.147 [2024-11-26 06:21:22.208332] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:38.147 [2024-11-26 06:21:22.208450] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:38.406 06:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.406 06:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:38.406 06:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:38.406 06:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] 
| select(.)' 00:11:38.406 06:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.406 06:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.406 06:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.406 06:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.406 06:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:38.406 06:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:38.406 06:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:11:38.406 06:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:38.406 06:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:38.406 06:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:38.406 06:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.406 06:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.406 BaseBdev2 00:11:38.406 06:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.406 06:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:38.406 06:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:38.406 06:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:38.406 06:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:38.406 06:21:22 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:38.406 06:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:38.406 06:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:38.406 06:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.406 06:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.406 06:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.406 06:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:38.406 06:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.406 06:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.406 [ 00:11:38.406 { 00:11:38.406 "name": "BaseBdev2", 00:11:38.406 "aliases": [ 00:11:38.406 "e2dc69f1-6e2e-4b23-82c4-ba5f39c05224" 00:11:38.406 ], 00:11:38.406 "product_name": "Malloc disk", 00:11:38.406 "block_size": 512, 00:11:38.406 "num_blocks": 65536, 00:11:38.406 "uuid": "e2dc69f1-6e2e-4b23-82c4-ba5f39c05224", 00:11:38.406 "assigned_rate_limits": { 00:11:38.406 "rw_ios_per_sec": 0, 00:11:38.406 "rw_mbytes_per_sec": 0, 00:11:38.406 "r_mbytes_per_sec": 0, 00:11:38.406 "w_mbytes_per_sec": 0 00:11:38.406 }, 00:11:38.406 "claimed": false, 00:11:38.406 "zoned": false, 00:11:38.406 "supported_io_types": { 00:11:38.406 "read": true, 00:11:38.406 "write": true, 00:11:38.406 "unmap": true, 00:11:38.406 "flush": true, 00:11:38.406 "reset": true, 00:11:38.406 "nvme_admin": false, 00:11:38.406 "nvme_io": false, 00:11:38.406 "nvme_io_md": false, 00:11:38.406 "write_zeroes": true, 00:11:38.406 "zcopy": true, 00:11:38.406 "get_zone_info": false, 00:11:38.406 "zone_management": false, 00:11:38.406 
"zone_append": false, 00:11:38.406 "compare": false, 00:11:38.406 "compare_and_write": false, 00:11:38.406 "abort": true, 00:11:38.406 "seek_hole": false, 00:11:38.406 "seek_data": false, 00:11:38.406 "copy": true, 00:11:38.406 "nvme_iov_md": false 00:11:38.406 }, 00:11:38.406 "memory_domains": [ 00:11:38.406 { 00:11:38.406 "dma_device_id": "system", 00:11:38.406 "dma_device_type": 1 00:11:38.406 }, 00:11:38.406 { 00:11:38.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.406 "dma_device_type": 2 00:11:38.406 } 00:11:38.406 ], 00:11:38.406 "driver_specific": {} 00:11:38.406 } 00:11:38.406 ] 00:11:38.406 06:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.406 06:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:38.406 06:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:38.406 06:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:38.406 06:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:38.406 06:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.406 06:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.406 BaseBdev3 00:11:38.406 06:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.406 06:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:38.406 06:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:38.406 06:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:38.406 06:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:38.406 
06:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:38.406 06:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:38.406 06:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:38.406 06:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.406 06:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.406 06:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.406 06:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:38.406 06:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.406 06:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.406 [ 00:11:38.406 { 00:11:38.406 "name": "BaseBdev3", 00:11:38.406 "aliases": [ 00:11:38.406 "76063387-bf17-49ea-9e59-28525c4b2da6" 00:11:38.406 ], 00:11:38.406 "product_name": "Malloc disk", 00:11:38.406 "block_size": 512, 00:11:38.406 "num_blocks": 65536, 00:11:38.406 "uuid": "76063387-bf17-49ea-9e59-28525c4b2da6", 00:11:38.406 "assigned_rate_limits": { 00:11:38.406 "rw_ios_per_sec": 0, 00:11:38.406 "rw_mbytes_per_sec": 0, 00:11:38.406 "r_mbytes_per_sec": 0, 00:11:38.406 "w_mbytes_per_sec": 0 00:11:38.406 }, 00:11:38.406 "claimed": false, 00:11:38.406 "zoned": false, 00:11:38.406 "supported_io_types": { 00:11:38.406 "read": true, 00:11:38.406 "write": true, 00:11:38.406 "unmap": true, 00:11:38.406 "flush": true, 00:11:38.406 "reset": true, 00:11:38.406 "nvme_admin": false, 00:11:38.406 "nvme_io": false, 00:11:38.406 "nvme_io_md": false, 00:11:38.406 "write_zeroes": true, 00:11:38.406 "zcopy": true, 00:11:38.406 "get_zone_info": false, 
00:11:38.406 "zone_management": false, 00:11:38.406 "zone_append": false, 00:11:38.406 "compare": false, 00:11:38.406 "compare_and_write": false, 00:11:38.406 "abort": true, 00:11:38.406 "seek_hole": false, 00:11:38.665 "seek_data": false, 00:11:38.665 "copy": true, 00:11:38.665 "nvme_iov_md": false 00:11:38.665 }, 00:11:38.665 "memory_domains": [ 00:11:38.665 { 00:11:38.665 "dma_device_id": "system", 00:11:38.665 "dma_device_type": 1 00:11:38.665 }, 00:11:38.665 { 00:11:38.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.665 "dma_device_type": 2 00:11:38.665 } 00:11:38.665 ], 00:11:38.665 "driver_specific": {} 00:11:38.665 } 00:11:38.665 ] 00:11:38.665 06:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.665 06:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:38.665 06:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:38.665 06:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:38.665 06:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:38.665 06:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.665 06:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.665 [2024-11-26 06:21:22.548323] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:38.665 [2024-11-26 06:21:22.548456] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:38.665 [2024-11-26 06:21:22.548547] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:38.665 [2024-11-26 06:21:22.551288] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 
is claimed 00:11:38.665 06:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.666 06:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:38.666 06:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:38.666 06:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:38.666 06:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:38.666 06:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:38.666 06:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:38.666 06:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.666 06:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.666 06:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.666 06:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.666 06:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:38.666 06:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.666 06:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.666 06:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.666 06:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.666 06:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:11:38.666 "name": "Existed_Raid", 00:11:38.666 "uuid": "c09dc848-e971-492a-a2a7-38037dfc57dc", 00:11:38.666 "strip_size_kb": 64, 00:11:38.666 "state": "configuring", 00:11:38.666 "raid_level": "raid0", 00:11:38.666 "superblock": true, 00:11:38.666 "num_base_bdevs": 3, 00:11:38.666 "num_base_bdevs_discovered": 2, 00:11:38.666 "num_base_bdevs_operational": 3, 00:11:38.666 "base_bdevs_list": [ 00:11:38.666 { 00:11:38.666 "name": "BaseBdev1", 00:11:38.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.666 "is_configured": false, 00:11:38.666 "data_offset": 0, 00:11:38.666 "data_size": 0 00:11:38.666 }, 00:11:38.666 { 00:11:38.666 "name": "BaseBdev2", 00:11:38.666 "uuid": "e2dc69f1-6e2e-4b23-82c4-ba5f39c05224", 00:11:38.666 "is_configured": true, 00:11:38.666 "data_offset": 2048, 00:11:38.666 "data_size": 63488 00:11:38.666 }, 00:11:38.666 { 00:11:38.666 "name": "BaseBdev3", 00:11:38.666 "uuid": "76063387-bf17-49ea-9e59-28525c4b2da6", 00:11:38.666 "is_configured": true, 00:11:38.666 "data_offset": 2048, 00:11:38.666 "data_size": 63488 00:11:38.666 } 00:11:38.666 ] 00:11:38.666 }' 00:11:38.666 06:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.666 06:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.924 06:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:38.924 06:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.924 06:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.924 [2024-11-26 06:21:23.035645] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:38.924 06:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.924 06:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:38.924 06:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:38.924 06:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:38.924 06:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:38.924 06:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:38.924 06:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:38.924 06:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.924 06:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.924 06:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.924 06:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.924 06:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:38.924 06:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.925 06:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.925 06:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.184 06:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.184 06:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.184 "name": "Existed_Raid", 00:11:39.184 "uuid": "c09dc848-e971-492a-a2a7-38037dfc57dc", 00:11:39.184 "strip_size_kb": 64, 00:11:39.184 "state": "configuring", 00:11:39.184 "raid_level": "raid0", 
00:11:39.184 "superblock": true, 00:11:39.184 "num_base_bdevs": 3, 00:11:39.184 "num_base_bdevs_discovered": 1, 00:11:39.184 "num_base_bdevs_operational": 3, 00:11:39.184 "base_bdevs_list": [ 00:11:39.184 { 00:11:39.184 "name": "BaseBdev1", 00:11:39.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.184 "is_configured": false, 00:11:39.184 "data_offset": 0, 00:11:39.184 "data_size": 0 00:11:39.184 }, 00:11:39.184 { 00:11:39.184 "name": null, 00:11:39.184 "uuid": "e2dc69f1-6e2e-4b23-82c4-ba5f39c05224", 00:11:39.184 "is_configured": false, 00:11:39.184 "data_offset": 0, 00:11:39.184 "data_size": 63488 00:11:39.184 }, 00:11:39.184 { 00:11:39.184 "name": "BaseBdev3", 00:11:39.184 "uuid": "76063387-bf17-49ea-9e59-28525c4b2da6", 00:11:39.184 "is_configured": true, 00:11:39.184 "data_offset": 2048, 00:11:39.184 "data_size": 63488 00:11:39.184 } 00:11:39.184 ] 00:11:39.184 }' 00:11:39.184 06:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.184 06:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.443 06:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.443 06:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.443 06:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.443 06:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:39.443 06:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.443 06:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:39.443 06:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:39.443 06:21:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.443 06:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.702 [2024-11-26 06:21:23.593262] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:39.702 BaseBdev1 00:11:39.702 06:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.702 06:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:39.702 06:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:39.702 06:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:39.702 06:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:39.702 06:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:39.702 06:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:39.702 06:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:39.702 06:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.702 06:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.702 06:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.702 06:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:39.702 06:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.702 06:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.702 [ 00:11:39.702 { 00:11:39.702 "name": "BaseBdev1", 00:11:39.702 
"aliases": [ 00:11:39.702 "a08c3381-c047-4169-8d83-cc7ca01c86c1" 00:11:39.702 ], 00:11:39.702 "product_name": "Malloc disk", 00:11:39.702 "block_size": 512, 00:11:39.702 "num_blocks": 65536, 00:11:39.702 "uuid": "a08c3381-c047-4169-8d83-cc7ca01c86c1", 00:11:39.702 "assigned_rate_limits": { 00:11:39.702 "rw_ios_per_sec": 0, 00:11:39.702 "rw_mbytes_per_sec": 0, 00:11:39.702 "r_mbytes_per_sec": 0, 00:11:39.702 "w_mbytes_per_sec": 0 00:11:39.702 }, 00:11:39.702 "claimed": true, 00:11:39.702 "claim_type": "exclusive_write", 00:11:39.702 "zoned": false, 00:11:39.702 "supported_io_types": { 00:11:39.702 "read": true, 00:11:39.702 "write": true, 00:11:39.702 "unmap": true, 00:11:39.702 "flush": true, 00:11:39.702 "reset": true, 00:11:39.702 "nvme_admin": false, 00:11:39.702 "nvme_io": false, 00:11:39.702 "nvme_io_md": false, 00:11:39.702 "write_zeroes": true, 00:11:39.702 "zcopy": true, 00:11:39.702 "get_zone_info": false, 00:11:39.702 "zone_management": false, 00:11:39.702 "zone_append": false, 00:11:39.702 "compare": false, 00:11:39.702 "compare_and_write": false, 00:11:39.702 "abort": true, 00:11:39.702 "seek_hole": false, 00:11:39.702 "seek_data": false, 00:11:39.702 "copy": true, 00:11:39.702 "nvme_iov_md": false 00:11:39.702 }, 00:11:39.702 "memory_domains": [ 00:11:39.702 { 00:11:39.702 "dma_device_id": "system", 00:11:39.702 "dma_device_type": 1 00:11:39.702 }, 00:11:39.702 { 00:11:39.702 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.702 "dma_device_type": 2 00:11:39.702 } 00:11:39.702 ], 00:11:39.702 "driver_specific": {} 00:11:39.702 } 00:11:39.702 ] 00:11:39.702 06:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.702 06:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:39.702 06:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:39.702 06:21:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:39.702 06:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:39.702 06:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:39.702 06:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:39.702 06:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:39.702 06:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.702 06:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.702 06:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.702 06:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.702 06:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.702 06:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.702 06:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.702 06:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.702 06:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.702 06:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.702 "name": "Existed_Raid", 00:11:39.702 "uuid": "c09dc848-e971-492a-a2a7-38037dfc57dc", 00:11:39.702 "strip_size_kb": 64, 00:11:39.702 "state": "configuring", 00:11:39.702 "raid_level": "raid0", 00:11:39.702 "superblock": true, 00:11:39.702 "num_base_bdevs": 3, 00:11:39.702 
"num_base_bdevs_discovered": 2, 00:11:39.702 "num_base_bdevs_operational": 3, 00:11:39.702 "base_bdevs_list": [ 00:11:39.702 { 00:11:39.702 "name": "BaseBdev1", 00:11:39.702 "uuid": "a08c3381-c047-4169-8d83-cc7ca01c86c1", 00:11:39.702 "is_configured": true, 00:11:39.702 "data_offset": 2048, 00:11:39.702 "data_size": 63488 00:11:39.702 }, 00:11:39.702 { 00:11:39.702 "name": null, 00:11:39.702 "uuid": "e2dc69f1-6e2e-4b23-82c4-ba5f39c05224", 00:11:39.702 "is_configured": false, 00:11:39.702 "data_offset": 0, 00:11:39.702 "data_size": 63488 00:11:39.702 }, 00:11:39.702 { 00:11:39.702 "name": "BaseBdev3", 00:11:39.702 "uuid": "76063387-bf17-49ea-9e59-28525c4b2da6", 00:11:39.702 "is_configured": true, 00:11:39.702 "data_offset": 2048, 00:11:39.702 "data_size": 63488 00:11:39.702 } 00:11:39.702 ] 00:11:39.702 }' 00:11:39.702 06:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.702 06:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.960 06:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:39.960 06:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.960 06:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.960 06:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.219 06:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.219 06:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:40.219 06:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:40.219 06:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.219 06:21:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.219 [2024-11-26 06:21:24.108463] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:40.219 06:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.219 06:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:40.219 06:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:40.219 06:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:40.219 06:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:40.219 06:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:40.219 06:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:40.219 06:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.219 06:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.219 06:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.220 06:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.220 06:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.220 06:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.220 06:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.220 06:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.220 06:21:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.220 06:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.220 "name": "Existed_Raid", 00:11:40.220 "uuid": "c09dc848-e971-492a-a2a7-38037dfc57dc", 00:11:40.220 "strip_size_kb": 64, 00:11:40.220 "state": "configuring", 00:11:40.220 "raid_level": "raid0", 00:11:40.220 "superblock": true, 00:11:40.220 "num_base_bdevs": 3, 00:11:40.220 "num_base_bdevs_discovered": 1, 00:11:40.220 "num_base_bdevs_operational": 3, 00:11:40.220 "base_bdevs_list": [ 00:11:40.220 { 00:11:40.220 "name": "BaseBdev1", 00:11:40.220 "uuid": "a08c3381-c047-4169-8d83-cc7ca01c86c1", 00:11:40.220 "is_configured": true, 00:11:40.220 "data_offset": 2048, 00:11:40.220 "data_size": 63488 00:11:40.220 }, 00:11:40.220 { 00:11:40.220 "name": null, 00:11:40.220 "uuid": "e2dc69f1-6e2e-4b23-82c4-ba5f39c05224", 00:11:40.220 "is_configured": false, 00:11:40.220 "data_offset": 0, 00:11:40.220 "data_size": 63488 00:11:40.220 }, 00:11:40.220 { 00:11:40.220 "name": null, 00:11:40.220 "uuid": "76063387-bf17-49ea-9e59-28525c4b2da6", 00:11:40.220 "is_configured": false, 00:11:40.220 "data_offset": 0, 00:11:40.220 "data_size": 63488 00:11:40.220 } 00:11:40.220 ] 00:11:40.220 }' 00:11:40.220 06:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.220 06:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.479 06:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.479 06:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.479 06:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.479 06:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:40.738 06:21:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.738 06:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:40.738 06:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:40.738 06:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.738 06:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.738 [2024-11-26 06:21:24.663562] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:40.738 06:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.738 06:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:40.738 06:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:40.738 06:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:40.738 06:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:40.738 06:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:40.738 06:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:40.738 06:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.738 06:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.738 06:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.738 06:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:11:40.738 06:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.738 06:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.738 06:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.738 06:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.738 06:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.738 06:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.738 "name": "Existed_Raid", 00:11:40.738 "uuid": "c09dc848-e971-492a-a2a7-38037dfc57dc", 00:11:40.738 "strip_size_kb": 64, 00:11:40.738 "state": "configuring", 00:11:40.738 "raid_level": "raid0", 00:11:40.738 "superblock": true, 00:11:40.738 "num_base_bdevs": 3, 00:11:40.738 "num_base_bdevs_discovered": 2, 00:11:40.738 "num_base_bdevs_operational": 3, 00:11:40.738 "base_bdevs_list": [ 00:11:40.738 { 00:11:40.738 "name": "BaseBdev1", 00:11:40.738 "uuid": "a08c3381-c047-4169-8d83-cc7ca01c86c1", 00:11:40.738 "is_configured": true, 00:11:40.738 "data_offset": 2048, 00:11:40.738 "data_size": 63488 00:11:40.738 }, 00:11:40.738 { 00:11:40.738 "name": null, 00:11:40.738 "uuid": "e2dc69f1-6e2e-4b23-82c4-ba5f39c05224", 00:11:40.738 "is_configured": false, 00:11:40.738 "data_offset": 0, 00:11:40.738 "data_size": 63488 00:11:40.738 }, 00:11:40.738 { 00:11:40.738 "name": "BaseBdev3", 00:11:40.738 "uuid": "76063387-bf17-49ea-9e59-28525c4b2da6", 00:11:40.738 "is_configured": true, 00:11:40.738 "data_offset": 2048, 00:11:40.738 "data_size": 63488 00:11:40.738 } 00:11:40.738 ] 00:11:40.738 }' 00:11:40.738 06:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.738 06:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:11:40.996 06:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.996 06:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.996 06:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.996 06:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:40.996 06:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.996 06:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:40.996 06:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:40.996 06:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.996 06:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.254 [2024-11-26 06:21:25.130898] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:41.254 06:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.254 06:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:41.254 06:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:41.254 06:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:41.254 06:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:41.254 06:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:41.254 06:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:11:41.254 06:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.254 06:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.254 06:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.254 06:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.254 06:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.254 06:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.254 06:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.254 06:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.254 06:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.254 06:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.254 "name": "Existed_Raid", 00:11:41.254 "uuid": "c09dc848-e971-492a-a2a7-38037dfc57dc", 00:11:41.254 "strip_size_kb": 64, 00:11:41.254 "state": "configuring", 00:11:41.254 "raid_level": "raid0", 00:11:41.254 "superblock": true, 00:11:41.254 "num_base_bdevs": 3, 00:11:41.254 "num_base_bdevs_discovered": 1, 00:11:41.254 "num_base_bdevs_operational": 3, 00:11:41.254 "base_bdevs_list": [ 00:11:41.254 { 00:11:41.254 "name": null, 00:11:41.254 "uuid": "a08c3381-c047-4169-8d83-cc7ca01c86c1", 00:11:41.254 "is_configured": false, 00:11:41.254 "data_offset": 0, 00:11:41.254 "data_size": 63488 00:11:41.254 }, 00:11:41.254 { 00:11:41.254 "name": null, 00:11:41.254 "uuid": "e2dc69f1-6e2e-4b23-82c4-ba5f39c05224", 00:11:41.254 "is_configured": false, 00:11:41.254 "data_offset": 0, 00:11:41.254 "data_size": 63488 00:11:41.254 
}, 00:11:41.254 { 00:11:41.254 "name": "BaseBdev3", 00:11:41.254 "uuid": "76063387-bf17-49ea-9e59-28525c4b2da6", 00:11:41.254 "is_configured": true, 00:11:41.254 "data_offset": 2048, 00:11:41.254 "data_size": 63488 00:11:41.254 } 00:11:41.254 ] 00:11:41.254 }' 00:11:41.254 06:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.254 06:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.514 06:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:41.514 06:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.514 06:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.514 06:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.773 06:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.773 06:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:41.773 06:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:41.773 06:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.773 06:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.773 [2024-11-26 06:21:25.663518] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:41.773 06:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.773 06:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:41.773 06:21:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:41.773 06:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:41.773 06:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:41.773 06:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:41.773 06:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:41.773 06:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.773 06:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.773 06:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.773 06:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.773 06:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.773 06:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.773 06:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.773 06:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.773 06:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.773 06:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.773 "name": "Existed_Raid", 00:11:41.773 "uuid": "c09dc848-e971-492a-a2a7-38037dfc57dc", 00:11:41.773 "strip_size_kb": 64, 00:11:41.773 "state": "configuring", 00:11:41.773 "raid_level": "raid0", 00:11:41.773 "superblock": true, 00:11:41.773 "num_base_bdevs": 3, 00:11:41.773 "num_base_bdevs_discovered": 2, 00:11:41.773 
"num_base_bdevs_operational": 3, 00:11:41.773 "base_bdevs_list": [ 00:11:41.773 { 00:11:41.773 "name": null, 00:11:41.773 "uuid": "a08c3381-c047-4169-8d83-cc7ca01c86c1", 00:11:41.773 "is_configured": false, 00:11:41.773 "data_offset": 0, 00:11:41.773 "data_size": 63488 00:11:41.773 }, 00:11:41.773 { 00:11:41.773 "name": "BaseBdev2", 00:11:41.773 "uuid": "e2dc69f1-6e2e-4b23-82c4-ba5f39c05224", 00:11:41.773 "is_configured": true, 00:11:41.773 "data_offset": 2048, 00:11:41.773 "data_size": 63488 00:11:41.773 }, 00:11:41.773 { 00:11:41.773 "name": "BaseBdev3", 00:11:41.773 "uuid": "76063387-bf17-49ea-9e59-28525c4b2da6", 00:11:41.773 "is_configured": true, 00:11:41.773 "data_offset": 2048, 00:11:41.773 "data_size": 63488 00:11:41.773 } 00:11:41.773 ] 00:11:41.773 }' 00:11:41.773 06:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.773 06:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.032 06:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:42.032 06:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.032 06:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.032 06:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.032 06:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.033 06:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:42.033 06:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:42.033 06:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.033 06:21:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.033 06:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.033 06:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.291 06:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a08c3381-c047-4169-8d83-cc7ca01c86c1 00:11:42.291 06:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.291 06:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.291 [2024-11-26 06:21:26.213852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:42.291 [2024-11-26 06:21:26.214366] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:42.291 [2024-11-26 06:21:26.214442] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:42.291 [2024-11-26 06:21:26.214814] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:42.291 [2024-11-26 06:21:26.215077] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:42.291 [2024-11-26 06:21:26.215143] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:42.291 NewBaseBdev 00:11:42.291 [2024-11-26 06:21:26.215410] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:42.291 06:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.291 06:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:42.291 06:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:42.291 06:21:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:42.291 06:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:42.291 06:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:42.291 06:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:42.291 06:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:42.291 06:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.291 06:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.291 06:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.291 06:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:42.291 06:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.291 06:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.291 [ 00:11:42.291 { 00:11:42.291 "name": "NewBaseBdev", 00:11:42.291 "aliases": [ 00:11:42.291 "a08c3381-c047-4169-8d83-cc7ca01c86c1" 00:11:42.291 ], 00:11:42.291 "product_name": "Malloc disk", 00:11:42.291 "block_size": 512, 00:11:42.291 "num_blocks": 65536, 00:11:42.291 "uuid": "a08c3381-c047-4169-8d83-cc7ca01c86c1", 00:11:42.291 "assigned_rate_limits": { 00:11:42.291 "rw_ios_per_sec": 0, 00:11:42.291 "rw_mbytes_per_sec": 0, 00:11:42.291 "r_mbytes_per_sec": 0, 00:11:42.291 "w_mbytes_per_sec": 0 00:11:42.291 }, 00:11:42.291 "claimed": true, 00:11:42.291 "claim_type": "exclusive_write", 00:11:42.291 "zoned": false, 00:11:42.291 "supported_io_types": { 00:11:42.291 "read": true, 00:11:42.291 "write": true, 00:11:42.291 "unmap": true, 
00:11:42.291 "flush": true, 00:11:42.291 "reset": true, 00:11:42.291 "nvme_admin": false, 00:11:42.291 "nvme_io": false, 00:11:42.291 "nvme_io_md": false, 00:11:42.291 "write_zeroes": true, 00:11:42.291 "zcopy": true, 00:11:42.291 "get_zone_info": false, 00:11:42.291 "zone_management": false, 00:11:42.291 "zone_append": false, 00:11:42.291 "compare": false, 00:11:42.291 "compare_and_write": false, 00:11:42.291 "abort": true, 00:11:42.291 "seek_hole": false, 00:11:42.291 "seek_data": false, 00:11:42.291 "copy": true, 00:11:42.291 "nvme_iov_md": false 00:11:42.291 }, 00:11:42.291 "memory_domains": [ 00:11:42.291 { 00:11:42.291 "dma_device_id": "system", 00:11:42.291 "dma_device_type": 1 00:11:42.291 }, 00:11:42.291 { 00:11:42.291 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.291 "dma_device_type": 2 00:11:42.291 } 00:11:42.291 ], 00:11:42.291 "driver_specific": {} 00:11:42.291 } 00:11:42.291 ] 00:11:42.291 06:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.291 06:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:42.292 06:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:11:42.292 06:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:42.292 06:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:42.292 06:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:42.292 06:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:42.292 06:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:42.292 06:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.292 06:21:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.292 06:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.292 06:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.292 06:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.292 06:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.292 06:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:42.292 06:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.292 06:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.292 06:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.292 "name": "Existed_Raid", 00:11:42.292 "uuid": "c09dc848-e971-492a-a2a7-38037dfc57dc", 00:11:42.292 "strip_size_kb": 64, 00:11:42.292 "state": "online", 00:11:42.292 "raid_level": "raid0", 00:11:42.292 "superblock": true, 00:11:42.292 "num_base_bdevs": 3, 00:11:42.292 "num_base_bdevs_discovered": 3, 00:11:42.292 "num_base_bdevs_operational": 3, 00:11:42.292 "base_bdevs_list": [ 00:11:42.292 { 00:11:42.292 "name": "NewBaseBdev", 00:11:42.292 "uuid": "a08c3381-c047-4169-8d83-cc7ca01c86c1", 00:11:42.292 "is_configured": true, 00:11:42.292 "data_offset": 2048, 00:11:42.292 "data_size": 63488 00:11:42.292 }, 00:11:42.292 { 00:11:42.292 "name": "BaseBdev2", 00:11:42.292 "uuid": "e2dc69f1-6e2e-4b23-82c4-ba5f39c05224", 00:11:42.292 "is_configured": true, 00:11:42.292 "data_offset": 2048, 00:11:42.292 "data_size": 63488 00:11:42.292 }, 00:11:42.292 { 00:11:42.292 "name": "BaseBdev3", 00:11:42.292 "uuid": "76063387-bf17-49ea-9e59-28525c4b2da6", 00:11:42.292 "is_configured": 
true, 00:11:42.292 "data_offset": 2048, 00:11:42.292 "data_size": 63488 00:11:42.292 } 00:11:42.292 ] 00:11:42.292 }' 00:11:42.292 06:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.292 06:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.859 06:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:42.859 06:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:42.859 06:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:42.859 06:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:42.859 06:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:42.859 06:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:42.859 06:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:42.859 06:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:42.859 06:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.859 06:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.859 [2024-11-26 06:21:26.693477] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:42.859 06:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.859 06:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:42.859 "name": "Existed_Raid", 00:11:42.859 "aliases": [ 00:11:42.859 "c09dc848-e971-492a-a2a7-38037dfc57dc" 00:11:42.859 ], 00:11:42.859 "product_name": "Raid Volume", 
00:11:42.859 "block_size": 512, 00:11:42.859 "num_blocks": 190464, 00:11:42.859 "uuid": "c09dc848-e971-492a-a2a7-38037dfc57dc", 00:11:42.859 "assigned_rate_limits": { 00:11:42.859 "rw_ios_per_sec": 0, 00:11:42.859 "rw_mbytes_per_sec": 0, 00:11:42.859 "r_mbytes_per_sec": 0, 00:11:42.859 "w_mbytes_per_sec": 0 00:11:42.859 }, 00:11:42.859 "claimed": false, 00:11:42.859 "zoned": false, 00:11:42.859 "supported_io_types": { 00:11:42.859 "read": true, 00:11:42.859 "write": true, 00:11:42.859 "unmap": true, 00:11:42.859 "flush": true, 00:11:42.859 "reset": true, 00:11:42.859 "nvme_admin": false, 00:11:42.859 "nvme_io": false, 00:11:42.859 "nvme_io_md": false, 00:11:42.859 "write_zeroes": true, 00:11:42.859 "zcopy": false, 00:11:42.859 "get_zone_info": false, 00:11:42.859 "zone_management": false, 00:11:42.859 "zone_append": false, 00:11:42.859 "compare": false, 00:11:42.859 "compare_and_write": false, 00:11:42.859 "abort": false, 00:11:42.859 "seek_hole": false, 00:11:42.859 "seek_data": false, 00:11:42.859 "copy": false, 00:11:42.859 "nvme_iov_md": false 00:11:42.859 }, 00:11:42.859 "memory_domains": [ 00:11:42.859 { 00:11:42.859 "dma_device_id": "system", 00:11:42.859 "dma_device_type": 1 00:11:42.859 }, 00:11:42.859 { 00:11:42.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.859 "dma_device_type": 2 00:11:42.859 }, 00:11:42.859 { 00:11:42.859 "dma_device_id": "system", 00:11:42.859 "dma_device_type": 1 00:11:42.859 }, 00:11:42.859 { 00:11:42.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.859 "dma_device_type": 2 00:11:42.859 }, 00:11:42.859 { 00:11:42.859 "dma_device_id": "system", 00:11:42.859 "dma_device_type": 1 00:11:42.859 }, 00:11:42.859 { 00:11:42.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.859 "dma_device_type": 2 00:11:42.859 } 00:11:42.859 ], 00:11:42.859 "driver_specific": { 00:11:42.859 "raid": { 00:11:42.859 "uuid": "c09dc848-e971-492a-a2a7-38037dfc57dc", 00:11:42.859 "strip_size_kb": 64, 00:11:42.859 "state": "online", 00:11:42.859 
"raid_level": "raid0", 00:11:42.859 "superblock": true, 00:11:42.859 "num_base_bdevs": 3, 00:11:42.859 "num_base_bdevs_discovered": 3, 00:11:42.859 "num_base_bdevs_operational": 3, 00:11:42.859 "base_bdevs_list": [ 00:11:42.859 { 00:11:42.859 "name": "NewBaseBdev", 00:11:42.859 "uuid": "a08c3381-c047-4169-8d83-cc7ca01c86c1", 00:11:42.859 "is_configured": true, 00:11:42.859 "data_offset": 2048, 00:11:42.859 "data_size": 63488 00:11:42.859 }, 00:11:42.859 { 00:11:42.859 "name": "BaseBdev2", 00:11:42.859 "uuid": "e2dc69f1-6e2e-4b23-82c4-ba5f39c05224", 00:11:42.859 "is_configured": true, 00:11:42.859 "data_offset": 2048, 00:11:42.859 "data_size": 63488 00:11:42.859 }, 00:11:42.859 { 00:11:42.859 "name": "BaseBdev3", 00:11:42.859 "uuid": "76063387-bf17-49ea-9e59-28525c4b2da6", 00:11:42.859 "is_configured": true, 00:11:42.859 "data_offset": 2048, 00:11:42.859 "data_size": 63488 00:11:42.859 } 00:11:42.859 ] 00:11:42.859 } 00:11:42.859 } 00:11:42.859 }' 00:11:42.859 06:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:42.859 06:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:42.859 BaseBdev2 00:11:42.859 BaseBdev3' 00:11:42.859 06:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:42.859 06:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:42.859 06:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:42.859 06:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:42.859 06:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:11:42.859 06:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.859 06:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.859 06:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.859 06:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:42.859 06:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:42.859 06:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:42.860 06:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:42.860 06:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:42.860 06:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.860 06:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.860 06:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.860 06:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:42.860 06:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:42.860 06:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:42.860 06:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:42.860 06:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:42.860 06:21:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.860 06:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.860 06:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.860 06:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:42.860 06:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:42.860 06:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:42.860 06:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.860 06:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.860 [2024-11-26 06:21:26.952690] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:42.860 [2024-11-26 06:21:26.952805] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:42.860 [2024-11-26 06:21:26.952979] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:42.860 [2024-11-26 06:21:26.953100] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:42.860 [2024-11-26 06:21:26.953170] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:42.860 06:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.860 06:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64838 00:11:42.860 06:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 64838 ']' 00:11:42.860 06:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 64838 00:11:42.860 06:21:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:42.860 06:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:42.860 06:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64838 00:11:43.185 06:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:43.185 06:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:43.185 06:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64838' 00:11:43.185 killing process with pid 64838 00:11:43.185 06:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 64838 00:11:43.185 [2024-11-26 06:21:27.002970] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:43.185 06:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 64838 00:11:43.445 [2024-11-26 06:21:27.358965] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:44.822 06:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:44.822 00:11:44.822 real 0m11.047s 00:11:44.822 user 0m17.083s 00:11:44.822 sys 0m2.153s 00:11:44.822 ************************************ 00:11:44.822 END TEST raid_state_function_test_sb 00:11:44.822 ************************************ 00:11:44.822 06:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:44.822 06:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.822 06:21:28 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:11:44.822 06:21:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:44.822 06:21:28 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:11:44.822 06:21:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:44.822 ************************************ 00:11:44.822 START TEST raid_superblock_test 00:11:44.822 ************************************ 00:11:44.822 06:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:11:44.822 06:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:11:44.822 06:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:11:44.822 06:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:44.822 06:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:44.822 06:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:44.822 06:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:44.822 06:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:44.822 06:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:44.822 06:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:44.822 06:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:44.822 06:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:44.822 06:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:44.822 06:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:44.822 06:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:11:44.822 06:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:44.822 06:21:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:44.822 06:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65464 00:11:44.822 06:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:44.822 06:21:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65464 00:11:44.822 06:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 65464 ']' 00:11:44.822 06:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:44.822 06:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:44.822 06:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:44.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:44.822 06:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:44.822 06:21:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.822 [2024-11-26 06:21:28.838837] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:11:44.822 [2024-11-26 06:21:28.839095] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65464 ] 00:11:45.081 [2024-11-26 06:21:29.019460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:45.081 [2024-11-26 06:21:29.166872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.338 [2024-11-26 06:21:29.423383] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:45.338 [2024-11-26 06:21:29.423463] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:45.596 06:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:45.596 06:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:45.596 06:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:45.596 06:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:45.596 06:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:45.596 06:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:45.596 06:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:45.596 06:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:45.596 06:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:45.596 06:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:45.596 06:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:45.596 
06:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.596 06:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.855 malloc1 00:11:45.855 06:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.855 06:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:45.855 06:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.855 06:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.855 [2024-11-26 06:21:29.758781] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:45.855 [2024-11-26 06:21:29.758903] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.855 [2024-11-26 06:21:29.758953] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:45.855 [2024-11-26 06:21:29.758984] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.855 [2024-11-26 06:21:29.761685] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.855 [2024-11-26 06:21:29.761761] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:45.855 pt1 00:11:45.855 06:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.855 06:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:45.855 06:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:45.855 06:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:45.855 06:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:45.855 06:21:29 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:45.855 06:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:45.855 06:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:45.855 06:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:45.855 06:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:45.855 06:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.855 06:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.855 malloc2 00:11:45.855 06:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.855 06:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:45.855 06:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.855 06:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.855 [2024-11-26 06:21:29.831544] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:45.855 [2024-11-26 06:21:29.831685] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.855 [2024-11-26 06:21:29.831787] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:45.855 [2024-11-26 06:21:29.831841] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.855 [2024-11-26 06:21:29.834713] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.855 [2024-11-26 06:21:29.834757] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:45.855 
pt2 00:11:45.855 06:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.855 06:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:45.855 06:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:45.855 06:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:45.855 06:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:45.855 06:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:45.855 06:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:45.855 06:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:45.855 06:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:45.855 06:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:45.855 06:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.855 06:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.855 malloc3 00:11:45.855 06:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.856 06:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:45.856 06:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.856 06:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.856 [2024-11-26 06:21:29.911451] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:45.856 [2024-11-26 06:21:29.911567] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.856 [2024-11-26 06:21:29.911611] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:45.856 [2024-11-26 06:21:29.911643] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.856 [2024-11-26 06:21:29.914314] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.856 [2024-11-26 06:21:29.914385] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:45.856 pt3 00:11:45.856 06:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.856 06:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:45.856 06:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:45.856 06:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:11:45.856 06:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.856 06:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.856 [2024-11-26 06:21:29.923483] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:45.856 [2024-11-26 06:21:29.925812] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:45.856 [2024-11-26 06:21:29.925918] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:45.856 [2024-11-26 06:21:29.926147] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:45.856 [2024-11-26 06:21:29.926206] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:45.856 [2024-11-26 06:21:29.926519] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:11:45.856 [2024-11-26 06:21:29.926759] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:45.856 [2024-11-26 06:21:29.926804] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:45.856 [2024-11-26 06:21:29.927043] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:45.856 06:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.856 06:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:11:45.856 06:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:45.856 06:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:45.856 06:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:45.856 06:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:45.856 06:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:45.856 06:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.856 06:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.856 06:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.856 06:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.856 06:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:45.856 06:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.856 06:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.856 06:21:29 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.856 06:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.856 06:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.856 "name": "raid_bdev1", 00:11:45.856 "uuid": "e30bc400-f4de-46af-985a-938a0da5de62", 00:11:45.856 "strip_size_kb": 64, 00:11:45.856 "state": "online", 00:11:45.856 "raid_level": "raid0", 00:11:45.856 "superblock": true, 00:11:45.856 "num_base_bdevs": 3, 00:11:45.856 "num_base_bdevs_discovered": 3, 00:11:45.856 "num_base_bdevs_operational": 3, 00:11:45.856 "base_bdevs_list": [ 00:11:45.856 { 00:11:45.856 "name": "pt1", 00:11:45.856 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:45.856 "is_configured": true, 00:11:45.856 "data_offset": 2048, 00:11:45.856 "data_size": 63488 00:11:45.856 }, 00:11:45.856 { 00:11:45.856 "name": "pt2", 00:11:45.856 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:45.856 "is_configured": true, 00:11:45.856 "data_offset": 2048, 00:11:45.856 "data_size": 63488 00:11:45.856 }, 00:11:45.856 { 00:11:45.856 "name": "pt3", 00:11:45.856 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:45.856 "is_configured": true, 00:11:45.856 "data_offset": 2048, 00:11:45.856 "data_size": 63488 00:11:45.856 } 00:11:45.856 ] 00:11:45.856 }' 00:11:45.856 06:21:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.856 06:21:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.423 06:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:46.423 06:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:46.423 06:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:46.423 06:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:11:46.423 06:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:46.423 06:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:46.423 06:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:46.423 06:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:46.423 06:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.423 06:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.423 [2024-11-26 06:21:30.387134] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:46.423 06:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.423 06:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:46.423 "name": "raid_bdev1", 00:11:46.423 "aliases": [ 00:11:46.423 "e30bc400-f4de-46af-985a-938a0da5de62" 00:11:46.423 ], 00:11:46.423 "product_name": "Raid Volume", 00:11:46.423 "block_size": 512, 00:11:46.423 "num_blocks": 190464, 00:11:46.423 "uuid": "e30bc400-f4de-46af-985a-938a0da5de62", 00:11:46.423 "assigned_rate_limits": { 00:11:46.423 "rw_ios_per_sec": 0, 00:11:46.423 "rw_mbytes_per_sec": 0, 00:11:46.423 "r_mbytes_per_sec": 0, 00:11:46.423 "w_mbytes_per_sec": 0 00:11:46.423 }, 00:11:46.423 "claimed": false, 00:11:46.423 "zoned": false, 00:11:46.423 "supported_io_types": { 00:11:46.423 "read": true, 00:11:46.423 "write": true, 00:11:46.423 "unmap": true, 00:11:46.423 "flush": true, 00:11:46.423 "reset": true, 00:11:46.423 "nvme_admin": false, 00:11:46.423 "nvme_io": false, 00:11:46.423 "nvme_io_md": false, 00:11:46.423 "write_zeroes": true, 00:11:46.423 "zcopy": false, 00:11:46.423 "get_zone_info": false, 00:11:46.423 "zone_management": false, 00:11:46.423 "zone_append": false, 00:11:46.423 "compare": 
false, 00:11:46.423 "compare_and_write": false, 00:11:46.423 "abort": false, 00:11:46.423 "seek_hole": false, 00:11:46.423 "seek_data": false, 00:11:46.423 "copy": false, 00:11:46.423 "nvme_iov_md": false 00:11:46.423 }, 00:11:46.423 "memory_domains": [ 00:11:46.423 { 00:11:46.423 "dma_device_id": "system", 00:11:46.423 "dma_device_type": 1 00:11:46.423 }, 00:11:46.423 { 00:11:46.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.423 "dma_device_type": 2 00:11:46.423 }, 00:11:46.423 { 00:11:46.423 "dma_device_id": "system", 00:11:46.423 "dma_device_type": 1 00:11:46.423 }, 00:11:46.423 { 00:11:46.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.423 "dma_device_type": 2 00:11:46.423 }, 00:11:46.423 { 00:11:46.423 "dma_device_id": "system", 00:11:46.423 "dma_device_type": 1 00:11:46.423 }, 00:11:46.423 { 00:11:46.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.423 "dma_device_type": 2 00:11:46.423 } 00:11:46.423 ], 00:11:46.423 "driver_specific": { 00:11:46.423 "raid": { 00:11:46.423 "uuid": "e30bc400-f4de-46af-985a-938a0da5de62", 00:11:46.423 "strip_size_kb": 64, 00:11:46.423 "state": "online", 00:11:46.423 "raid_level": "raid0", 00:11:46.423 "superblock": true, 00:11:46.423 "num_base_bdevs": 3, 00:11:46.423 "num_base_bdevs_discovered": 3, 00:11:46.423 "num_base_bdevs_operational": 3, 00:11:46.423 "base_bdevs_list": [ 00:11:46.423 { 00:11:46.423 "name": "pt1", 00:11:46.423 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:46.423 "is_configured": true, 00:11:46.423 "data_offset": 2048, 00:11:46.423 "data_size": 63488 00:11:46.423 }, 00:11:46.423 { 00:11:46.423 "name": "pt2", 00:11:46.423 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:46.423 "is_configured": true, 00:11:46.423 "data_offset": 2048, 00:11:46.423 "data_size": 63488 00:11:46.423 }, 00:11:46.423 { 00:11:46.423 "name": "pt3", 00:11:46.423 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:46.423 "is_configured": true, 00:11:46.423 "data_offset": 2048, 00:11:46.423 "data_size": 
63488 00:11:46.423 } 00:11:46.423 ] 00:11:46.423 } 00:11:46.423 } 00:11:46.423 }' 00:11:46.423 06:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:46.423 06:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:46.423 pt2 00:11:46.423 pt3' 00:11:46.423 06:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:46.423 06:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:46.423 06:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:46.423 06:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:46.423 06:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.423 06:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.423 06:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:46.423 06:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.682 06:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:46.682 06:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:46.682 06:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:46.682 06:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:46.682 06:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.682 06:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.682 
06:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:46.682 06:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.682 06:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:46.682 06:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:46.682 06:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:46.682 06:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:46.682 06:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.682 06:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.682 06:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:46.682 06:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.682 06:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:46.682 06:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:46.682 06:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:46.682 06:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:46.682 06:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.682 06:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.682 [2024-11-26 06:21:30.678607] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:46.682 06:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:46.682 06:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e30bc400-f4de-46af-985a-938a0da5de62 00:11:46.682 06:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z e30bc400-f4de-46af-985a-938a0da5de62 ']' 00:11:46.682 06:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:46.682 06:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.682 06:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.682 [2024-11-26 06:21:30.726152] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:46.682 [2024-11-26 06:21:30.726244] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:46.682 [2024-11-26 06:21:30.726407] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:46.682 [2024-11-26 06:21:30.726533] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:46.682 [2024-11-26 06:21:30.726596] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:46.682 06:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.682 06:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.682 06:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:46.682 06:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.682 06:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.682 06:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.683 06:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:11:46.683 06:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:46.683 06:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:46.683 06:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:46.683 06:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.683 06:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.683 06:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.683 06:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:46.683 06:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:46.683 06:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.683 06:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.683 06:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.683 06:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:46.683 06:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:46.683 06:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.683 06:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.942 06:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.942 06:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:46.942 06:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.942 06:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 
-- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:46.942 06:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.942 06:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.942 06:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:46.942 06:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:46.942 06:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:46.942 06:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:46.942 06:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:46.942 06:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:46.942 06:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:46.942 06:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:46.942 06:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:46.942 06:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.942 06:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.942 [2024-11-26 06:21:30.873979] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:46.942 [2024-11-26 06:21:30.876646] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:46.942 [2024-11-26 06:21:30.876772] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:46.942 [2024-11-26 06:21:30.876881] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:46.942 [2024-11-26 06:21:30.877042] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:46.942 [2024-11-26 06:21:30.877128] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:46.942 [2024-11-26 06:21:30.877207] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:46.942 [2024-11-26 06:21:30.877260] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:11:46.942 request: 00:11:46.942 { 00:11:46.942 "name": "raid_bdev1", 00:11:46.942 "raid_level": "raid0", 00:11:46.942 "base_bdevs": [ 00:11:46.942 "malloc1", 00:11:46.942 "malloc2", 00:11:46.942 "malloc3" 00:11:46.942 ], 00:11:46.942 "strip_size_kb": 64, 00:11:46.942 "superblock": false, 00:11:46.942 "method": "bdev_raid_create", 00:11:46.942 "req_id": 1 00:11:46.942 } 00:11:46.942 Got JSON-RPC error response 00:11:46.942 response: 00:11:46.942 { 00:11:46.942 "code": -17, 00:11:46.942 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:46.942 } 00:11:46.942 06:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:46.942 06:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:46.942 06:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:46.942 06:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:46.942 06:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:46.942 06:21:30 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.942 06:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:46.942 06:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.942 06:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.942 06:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.942 06:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:46.942 06:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:46.942 06:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:46.942 06:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.942 06:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.942 [2024-11-26 06:21:30.941775] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:46.942 [2024-11-26 06:21:30.941917] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:46.942 [2024-11-26 06:21:30.941945] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:46.942 [2024-11-26 06:21:30.941955] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:46.942 [2024-11-26 06:21:30.944623] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:46.942 [2024-11-26 06:21:30.944662] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:46.942 [2024-11-26 06:21:30.944770] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:46.942 [2024-11-26 06:21:30.944830] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:11:46.942 pt1 00:11:46.942 06:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.943 06:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:11:46.943 06:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:46.943 06:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:46.943 06:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:46.943 06:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:46.943 06:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:46.943 06:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.943 06:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.943 06:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.943 06:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.943 06:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.943 06:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.943 06:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.943 06:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:46.943 06:21:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.943 06:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.943 "name": "raid_bdev1", 00:11:46.943 "uuid": "e30bc400-f4de-46af-985a-938a0da5de62", 00:11:46.943 
"strip_size_kb": 64, 00:11:46.943 "state": "configuring", 00:11:46.943 "raid_level": "raid0", 00:11:46.943 "superblock": true, 00:11:46.943 "num_base_bdevs": 3, 00:11:46.943 "num_base_bdevs_discovered": 1, 00:11:46.943 "num_base_bdevs_operational": 3, 00:11:46.943 "base_bdevs_list": [ 00:11:46.943 { 00:11:46.943 "name": "pt1", 00:11:46.943 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:46.943 "is_configured": true, 00:11:46.943 "data_offset": 2048, 00:11:46.943 "data_size": 63488 00:11:46.943 }, 00:11:46.943 { 00:11:46.943 "name": null, 00:11:46.943 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:46.943 "is_configured": false, 00:11:46.943 "data_offset": 2048, 00:11:46.943 "data_size": 63488 00:11:46.943 }, 00:11:46.943 { 00:11:46.943 "name": null, 00:11:46.943 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:46.943 "is_configured": false, 00:11:46.943 "data_offset": 2048, 00:11:46.943 "data_size": 63488 00:11:46.943 } 00:11:46.943 ] 00:11:46.943 }' 00:11:46.943 06:21:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.943 06:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.511 06:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:11:47.511 06:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:47.511 06:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.511 06:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.511 [2024-11-26 06:21:31.393086] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:47.511 [2024-11-26 06:21:31.393242] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.511 [2024-11-26 06:21:31.393292] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:11:47.511 [2024-11-26 06:21:31.393325] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.511 [2024-11-26 06:21:31.393981] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.511 [2024-11-26 06:21:31.394087] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:47.511 [2024-11-26 06:21:31.394254] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:47.511 [2024-11-26 06:21:31.394315] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:47.511 pt2 00:11:47.511 06:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.511 06:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:47.511 06:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.511 06:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.511 [2024-11-26 06:21:31.405140] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:47.511 06:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.511 06:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:11:47.511 06:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:47.511 06:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:47.511 06:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:47.511 06:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:47.511 06:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:47.511 06:21:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.511 06:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.511 06:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.511 06:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.511 06:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.511 06:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:47.511 06:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.511 06:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.511 06:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.511 06:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.511 "name": "raid_bdev1", 00:11:47.511 "uuid": "e30bc400-f4de-46af-985a-938a0da5de62", 00:11:47.511 "strip_size_kb": 64, 00:11:47.511 "state": "configuring", 00:11:47.511 "raid_level": "raid0", 00:11:47.511 "superblock": true, 00:11:47.511 "num_base_bdevs": 3, 00:11:47.511 "num_base_bdevs_discovered": 1, 00:11:47.511 "num_base_bdevs_operational": 3, 00:11:47.511 "base_bdevs_list": [ 00:11:47.512 { 00:11:47.512 "name": "pt1", 00:11:47.512 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:47.512 "is_configured": true, 00:11:47.512 "data_offset": 2048, 00:11:47.512 "data_size": 63488 00:11:47.512 }, 00:11:47.512 { 00:11:47.512 "name": null, 00:11:47.512 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:47.512 "is_configured": false, 00:11:47.512 "data_offset": 0, 00:11:47.512 "data_size": 63488 00:11:47.512 }, 00:11:47.512 { 00:11:47.512 "name": null, 00:11:47.512 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:47.512 
"is_configured": false, 00:11:47.512 "data_offset": 2048, 00:11:47.512 "data_size": 63488 00:11:47.512 } 00:11:47.512 ] 00:11:47.512 }' 00:11:47.512 06:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.512 06:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.771 06:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:47.771 06:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:47.771 06:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:47.771 06:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.771 06:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.771 [2024-11-26 06:21:31.844318] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:47.771 [2024-11-26 06:21:31.844474] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.771 [2024-11-26 06:21:31.844542] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:11:47.771 [2024-11-26 06:21:31.844590] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.771 [2024-11-26 06:21:31.845271] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.771 [2024-11-26 06:21:31.845348] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:47.771 [2024-11-26 06:21:31.845521] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:47.771 [2024-11-26 06:21:31.845580] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:47.771 pt2 00:11:47.771 06:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:47.771 06:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:47.771 06:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:47.771 06:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:47.771 06:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.771 06:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.771 [2024-11-26 06:21:31.852256] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:47.771 [2024-11-26 06:21:31.852311] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.771 [2024-11-26 06:21:31.852329] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:47.771 [2024-11-26 06:21:31.852341] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.771 [2024-11-26 06:21:31.852799] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.771 [2024-11-26 06:21:31.852822] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:47.771 [2024-11-26 06:21:31.852894] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:47.771 [2024-11-26 06:21:31.852918] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:47.771 [2024-11-26 06:21:31.853057] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:47.771 [2024-11-26 06:21:31.853086] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:47.771 [2024-11-26 06:21:31.853381] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:47.771 [2024-11-26 06:21:31.853562] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:47.771 [2024-11-26 06:21:31.853572] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:47.771 [2024-11-26 06:21:31.853730] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:47.771 pt3 00:11:47.771 06:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.771 06:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:47.771 06:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:47.771 06:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:11:47.771 06:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:47.771 06:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:47.771 06:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:47.771 06:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:47.771 06:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:47.771 06:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.771 06:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.771 06:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.771 06:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.771 06:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.771 06:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:11:47.771 06:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.771 06:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.771 06:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.029 06:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.029 "name": "raid_bdev1", 00:11:48.029 "uuid": "e30bc400-f4de-46af-985a-938a0da5de62", 00:11:48.029 "strip_size_kb": 64, 00:11:48.029 "state": "online", 00:11:48.029 "raid_level": "raid0", 00:11:48.029 "superblock": true, 00:11:48.029 "num_base_bdevs": 3, 00:11:48.029 "num_base_bdevs_discovered": 3, 00:11:48.029 "num_base_bdevs_operational": 3, 00:11:48.029 "base_bdevs_list": [ 00:11:48.029 { 00:11:48.029 "name": "pt1", 00:11:48.030 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:48.030 "is_configured": true, 00:11:48.030 "data_offset": 2048, 00:11:48.030 "data_size": 63488 00:11:48.030 }, 00:11:48.030 { 00:11:48.030 "name": "pt2", 00:11:48.030 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:48.030 "is_configured": true, 00:11:48.030 "data_offset": 2048, 00:11:48.030 "data_size": 63488 00:11:48.030 }, 00:11:48.030 { 00:11:48.030 "name": "pt3", 00:11:48.030 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:48.030 "is_configured": true, 00:11:48.030 "data_offset": 2048, 00:11:48.030 "data_size": 63488 00:11:48.030 } 00:11:48.030 ] 00:11:48.030 }' 00:11:48.030 06:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.030 06:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.289 06:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:48.289 06:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:48.289 06:21:32 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:48.289 06:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:48.289 06:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:48.289 06:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:48.289 06:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:48.289 06:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.289 06:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.289 06:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:48.289 [2024-11-26 06:21:32.323932] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:48.289 06:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.289 06:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:48.289 "name": "raid_bdev1", 00:11:48.289 "aliases": [ 00:11:48.289 "e30bc400-f4de-46af-985a-938a0da5de62" 00:11:48.289 ], 00:11:48.289 "product_name": "Raid Volume", 00:11:48.289 "block_size": 512, 00:11:48.289 "num_blocks": 190464, 00:11:48.289 "uuid": "e30bc400-f4de-46af-985a-938a0da5de62", 00:11:48.289 "assigned_rate_limits": { 00:11:48.289 "rw_ios_per_sec": 0, 00:11:48.289 "rw_mbytes_per_sec": 0, 00:11:48.289 "r_mbytes_per_sec": 0, 00:11:48.289 "w_mbytes_per_sec": 0 00:11:48.289 }, 00:11:48.289 "claimed": false, 00:11:48.289 "zoned": false, 00:11:48.289 "supported_io_types": { 00:11:48.289 "read": true, 00:11:48.289 "write": true, 00:11:48.289 "unmap": true, 00:11:48.289 "flush": true, 00:11:48.289 "reset": true, 00:11:48.289 "nvme_admin": false, 00:11:48.289 "nvme_io": false, 00:11:48.289 "nvme_io_md": false, 00:11:48.289 
"write_zeroes": true, 00:11:48.289 "zcopy": false, 00:11:48.289 "get_zone_info": false, 00:11:48.289 "zone_management": false, 00:11:48.289 "zone_append": false, 00:11:48.290 "compare": false, 00:11:48.290 "compare_and_write": false, 00:11:48.290 "abort": false, 00:11:48.290 "seek_hole": false, 00:11:48.290 "seek_data": false, 00:11:48.290 "copy": false, 00:11:48.290 "nvme_iov_md": false 00:11:48.290 }, 00:11:48.290 "memory_domains": [ 00:11:48.290 { 00:11:48.290 "dma_device_id": "system", 00:11:48.290 "dma_device_type": 1 00:11:48.290 }, 00:11:48.290 { 00:11:48.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.290 "dma_device_type": 2 00:11:48.290 }, 00:11:48.290 { 00:11:48.290 "dma_device_id": "system", 00:11:48.290 "dma_device_type": 1 00:11:48.290 }, 00:11:48.290 { 00:11:48.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.290 "dma_device_type": 2 00:11:48.290 }, 00:11:48.290 { 00:11:48.290 "dma_device_id": "system", 00:11:48.290 "dma_device_type": 1 00:11:48.290 }, 00:11:48.290 { 00:11:48.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.290 "dma_device_type": 2 00:11:48.290 } 00:11:48.290 ], 00:11:48.290 "driver_specific": { 00:11:48.290 "raid": { 00:11:48.290 "uuid": "e30bc400-f4de-46af-985a-938a0da5de62", 00:11:48.290 "strip_size_kb": 64, 00:11:48.290 "state": "online", 00:11:48.290 "raid_level": "raid0", 00:11:48.290 "superblock": true, 00:11:48.290 "num_base_bdevs": 3, 00:11:48.290 "num_base_bdevs_discovered": 3, 00:11:48.290 "num_base_bdevs_operational": 3, 00:11:48.290 "base_bdevs_list": [ 00:11:48.290 { 00:11:48.290 "name": "pt1", 00:11:48.290 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:48.290 "is_configured": true, 00:11:48.290 "data_offset": 2048, 00:11:48.290 "data_size": 63488 00:11:48.290 }, 00:11:48.290 { 00:11:48.290 "name": "pt2", 00:11:48.290 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:48.290 "is_configured": true, 00:11:48.290 "data_offset": 2048, 00:11:48.290 "data_size": 63488 00:11:48.290 }, 00:11:48.290 
{ 00:11:48.290 "name": "pt3", 00:11:48.290 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:48.290 "is_configured": true, 00:11:48.290 "data_offset": 2048, 00:11:48.290 "data_size": 63488 00:11:48.290 } 00:11:48.290 ] 00:11:48.290 } 00:11:48.290 } 00:11:48.290 }' 00:11:48.290 06:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:48.290 06:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:48.290 pt2 00:11:48.290 pt3' 00:11:48.290 06:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.549 06:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:48.549 06:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:48.549 06:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.549 06:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:48.549 06:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.549 06:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.549 06:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.549 06:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:48.549 06:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:48.549 06:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:48.549 06:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:11:48.549 06:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:48.549 06:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.549 06:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.549 06:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.549 06:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:48.549 06:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:48.549 06:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:48.549 06:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:48.549 06:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.549 06:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.549 06:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.549 06:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.549 06:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:48.549 06:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:48.549 06:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:48.549 06:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:48.549 06:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.549 06:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.549 [2024-11-26 
06:21:32.599396] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:48.549 06:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.550 06:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' e30bc400-f4de-46af-985a-938a0da5de62 '!=' e30bc400-f4de-46af-985a-938a0da5de62 ']' 00:11:48.550 06:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:11:48.550 06:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:48.550 06:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:48.550 06:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65464 00:11:48.550 06:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 65464 ']' 00:11:48.550 06:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 65464 00:11:48.550 06:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:11:48.550 06:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:48.550 06:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65464 00:11:48.550 06:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:48.550 06:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:48.550 06:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65464' 00:11:48.550 killing process with pid 65464 00:11:48.808 06:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 65464 00:11:48.808 [2024-11-26 06:21:32.681244] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:48.808 06:21:32 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@978 -- # wait 65464 00:11:48.808 [2024-11-26 06:21:32.681452] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:48.808 [2024-11-26 06:21:32.681540] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:48.808 [2024-11-26 06:21:32.681556] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:49.067 [2024-11-26 06:21:33.033637] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:50.452 06:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:50.452 00:11:50.452 real 0m5.590s 00:11:50.452 user 0m7.764s 00:11:50.452 sys 0m1.095s 00:11:50.452 06:21:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:50.452 06:21:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.452 ************************************ 00:11:50.452 END TEST raid_superblock_test 00:11:50.452 ************************************ 00:11:50.452 06:21:34 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:11:50.452 06:21:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:50.452 06:21:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:50.452 06:21:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:50.452 ************************************ 00:11:50.452 START TEST raid_read_error_test 00:11:50.452 ************************************ 00:11:50.452 06:21:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:11:50.452 06:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:50.452 06:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:11:50.452 06:21:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:50.452 06:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:50.452 06:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:50.452 06:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:50.452 06:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:50.452 06:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:50.452 06:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:50.452 06:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:50.452 06:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:50.452 06:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:50.452 06:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:50.452 06:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:50.452 06:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:50.452 06:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:50.452 06:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:50.452 06:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:50.452 06:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:50.452 06:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:50.452 06:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:50.453 06:21:34 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:50.453 06:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:50.453 06:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:50.453 06:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:50.453 06:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.osunIfpPXX 00:11:50.453 06:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65723 00:11:50.453 06:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:50.453 06:21:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65723 00:11:50.453 06:21:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 65723 ']' 00:11:50.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:50.453 06:21:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:50.453 06:21:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:50.453 06:21:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:50.453 06:21:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:50.453 06:21:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.453 [2024-11-26 06:21:34.508897] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:11:50.453 [2024-11-26 06:21:34.509746] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65723 ] 00:11:50.711 [2024-11-26 06:21:34.693287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:50.970 [2024-11-26 06:21:34.858393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.229 [2024-11-26 06:21:35.104721] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:51.229 [2024-11-26 06:21:35.104945] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:51.489 06:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:51.489 06:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:51.489 06:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:51.489 06:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:51.489 06:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.489 06:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.489 BaseBdev1_malloc 00:11:51.489 06:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.489 06:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:51.489 06:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.489 06:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.489 true 00:11:51.489 06:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:51.489 06:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:51.489 06:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.489 06:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.489 [2024-11-26 06:21:35.457258] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:51.489 [2024-11-26 06:21:35.457390] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:51.489 [2024-11-26 06:21:35.457441] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:51.489 [2024-11-26 06:21:35.457466] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:51.489 [2024-11-26 06:21:35.461344] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.489 [2024-11-26 06:21:35.461422] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:51.489 BaseBdev1 00:11:51.489 06:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.489 06:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:51.489 06:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:51.489 06:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.489 06:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.489 BaseBdev2_malloc 00:11:51.489 06:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.489 06:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:51.489 06:21:35 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.489 06:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.489 true 00:11:51.489 06:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.489 06:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:51.489 06:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.489 06:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.489 [2024-11-26 06:21:35.535246] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:51.489 [2024-11-26 06:21:35.535335] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:51.489 [2024-11-26 06:21:35.535361] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:51.489 [2024-11-26 06:21:35.535375] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:51.489 [2024-11-26 06:21:35.538287] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.489 [2024-11-26 06:21:35.538334] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:51.489 BaseBdev2 00:11:51.489 06:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.489 06:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:51.489 06:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:51.489 06:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.489 06:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.489 BaseBdev3_malloc 00:11:51.489 06:21:35 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.489 06:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:51.489 06:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.489 06:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.489 true 00:11:51.489 06:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.489 06:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:51.489 06:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.489 06:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.489 [2024-11-26 06:21:35.618737] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:51.489 [2024-11-26 06:21:35.618851] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:51.489 [2024-11-26 06:21:35.618888] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:51.489 [2024-11-26 06:21:35.618919] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:51.781 [2024-11-26 06:21:35.621454] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.781 [2024-11-26 06:21:35.621527] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:51.781 BaseBdev3 00:11:51.781 06:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.781 06:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:51.781 06:21:35 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.781 06:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.781 [2024-11-26 06:21:35.630794] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:51.781 [2024-11-26 06:21:35.632924] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:51.781 [2024-11-26 06:21:35.633005] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:51.781 [2024-11-26 06:21:35.633270] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:51.781 [2024-11-26 06:21:35.633319] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:51.781 [2024-11-26 06:21:35.633622] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:11:51.781 [2024-11-26 06:21:35.633843] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:51.781 [2024-11-26 06:21:35.633891] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:51.781 [2024-11-26 06:21:35.634162] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:51.781 06:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.781 06:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:11:51.781 06:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:51.781 06:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:51.781 06:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:51.781 06:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:51.781 06:21:35 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:51.781 06:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.781 06:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.782 06:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.782 06:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.782 06:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.782 06:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.782 06:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.782 06:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:51.782 06:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.782 06:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.782 "name": "raid_bdev1", 00:11:51.782 "uuid": "61858a3f-7bc1-4496-ab0a-bc87859de14b", 00:11:51.782 "strip_size_kb": 64, 00:11:51.782 "state": "online", 00:11:51.782 "raid_level": "raid0", 00:11:51.782 "superblock": true, 00:11:51.782 "num_base_bdevs": 3, 00:11:51.782 "num_base_bdevs_discovered": 3, 00:11:51.782 "num_base_bdevs_operational": 3, 00:11:51.782 "base_bdevs_list": [ 00:11:51.782 { 00:11:51.782 "name": "BaseBdev1", 00:11:51.782 "uuid": "27133522-0fb9-5656-b6f2-e0db18c45312", 00:11:51.782 "is_configured": true, 00:11:51.782 "data_offset": 2048, 00:11:51.782 "data_size": 63488 00:11:51.782 }, 00:11:51.782 { 00:11:51.782 "name": "BaseBdev2", 00:11:51.782 "uuid": "1d371106-9c69-5ba3-ba0a-85ef732c3c5c", 00:11:51.782 "is_configured": true, 00:11:51.782 "data_offset": 2048, 00:11:51.782 "data_size": 63488 
00:11:51.782 }, 00:11:51.782 { 00:11:51.782 "name": "BaseBdev3", 00:11:51.782 "uuid": "57200467-3087-50aa-bd7f-111df7fbaf43", 00:11:51.782 "is_configured": true, 00:11:51.782 "data_offset": 2048, 00:11:51.782 "data_size": 63488 00:11:51.782 } 00:11:51.782 ] 00:11:51.782 }' 00:11:51.782 06:21:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.782 06:21:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.040 06:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:52.040 06:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:52.299 [2024-11-26 06:21:36.247432] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:11:53.235 06:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:53.235 06:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.235 06:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.235 06:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.235 06:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:53.235 06:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:53.235 06:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:11:53.235 06:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:11:53.235 06:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:53.235 06:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:11:53.235 06:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:53.235 06:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:53.235 06:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:53.235 06:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.235 06:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.235 06:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.235 06:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.235 06:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.235 06:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.235 06:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.235 06:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:53.235 06:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.235 06:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.235 "name": "raid_bdev1", 00:11:53.235 "uuid": "61858a3f-7bc1-4496-ab0a-bc87859de14b", 00:11:53.235 "strip_size_kb": 64, 00:11:53.235 "state": "online", 00:11:53.235 "raid_level": "raid0", 00:11:53.235 "superblock": true, 00:11:53.235 "num_base_bdevs": 3, 00:11:53.235 "num_base_bdevs_discovered": 3, 00:11:53.235 "num_base_bdevs_operational": 3, 00:11:53.235 "base_bdevs_list": [ 00:11:53.235 { 00:11:53.235 "name": "BaseBdev1", 00:11:53.235 "uuid": "27133522-0fb9-5656-b6f2-e0db18c45312", 00:11:53.235 "is_configured": true, 00:11:53.235 "data_offset": 2048, 00:11:53.235 "data_size": 63488 
00:11:53.235 }, 00:11:53.235 { 00:11:53.235 "name": "BaseBdev2", 00:11:53.235 "uuid": "1d371106-9c69-5ba3-ba0a-85ef732c3c5c", 00:11:53.235 "is_configured": true, 00:11:53.235 "data_offset": 2048, 00:11:53.235 "data_size": 63488 00:11:53.235 }, 00:11:53.235 { 00:11:53.235 "name": "BaseBdev3", 00:11:53.235 "uuid": "57200467-3087-50aa-bd7f-111df7fbaf43", 00:11:53.235 "is_configured": true, 00:11:53.235 "data_offset": 2048, 00:11:53.235 "data_size": 63488 00:11:53.235 } 00:11:53.235 ] 00:11:53.235 }' 00:11:53.235 06:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.235 06:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.493 06:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:53.493 06:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.493 06:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.493 [2024-11-26 06:21:37.585535] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:53.493 [2024-11-26 06:21:37.585639] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:53.493 [2024-11-26 06:21:37.588856] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:53.493 [2024-11-26 06:21:37.588999] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:53.493 [2024-11-26 06:21:37.589112] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:53.493 [2024-11-26 06:21:37.589181] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:53.493 { 00:11:53.493 "results": [ 00:11:53.493 { 00:11:53.493 "job": "raid_bdev1", 00:11:53.493 "core_mask": "0x1", 00:11:53.493 "workload": "randrw", 00:11:53.493 "percentage": 50, 
00:11:53.493 "status": "finished", 00:11:53.493 "queue_depth": 1, 00:11:53.493 "io_size": 131072, 00:11:53.493 "runtime": 1.338191, 00:11:53.493 "iops": 12424.235404363055, 00:11:53.493 "mibps": 1553.0294255453819, 00:11:53.493 "io_failed": 1, 00:11:53.493 "io_timeout": 0, 00:11:53.493 "avg_latency_us": 113.3583865670164, 00:11:53.493 "min_latency_us": 26.382532751091702, 00:11:53.493 "max_latency_us": 1674.172925764192 00:11:53.493 } 00:11:53.493 ], 00:11:53.493 "core_count": 1 00:11:53.493 } 00:11:53.493 06:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.493 06:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65723 00:11:53.493 06:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 65723 ']' 00:11:53.493 06:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 65723 00:11:53.493 06:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:53.493 06:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:53.493 06:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65723 00:11:53.753 killing process with pid 65723 00:11:53.754 06:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:53.754 06:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:53.754 06:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65723' 00:11:53.754 06:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 65723 00:11:53.754 [2024-11-26 06:21:37.635955] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:53.754 06:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 65723 00:11:54.012 [2024-11-26 
06:21:37.921201] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:55.393 06:21:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:55.393 06:21:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:55.393 06:21:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.osunIfpPXX 00:11:55.393 ************************************ 00:11:55.393 END TEST raid_read_error_test 00:11:55.393 ************************************ 00:11:55.393 06:21:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:11:55.393 06:21:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:55.393 06:21:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:55.393 06:21:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:55.393 06:21:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:11:55.393 00:11:55.393 real 0m4.888s 00:11:55.393 user 0m5.714s 00:11:55.393 sys 0m0.685s 00:11:55.393 06:21:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:55.393 06:21:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.393 06:21:39 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:11:55.393 06:21:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:55.393 06:21:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:55.393 06:21:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:55.393 ************************************ 00:11:55.393 START TEST raid_write_error_test 00:11:55.393 ************************************ 00:11:55.393 06:21:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:11:55.393 06:21:39 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:55.393 06:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:11:55.393 06:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:55.393 06:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:55.393 06:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:55.393 06:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:55.393 06:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:55.393 06:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:55.393 06:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:55.393 06:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:55.393 06:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:55.393 06:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:55.393 06:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:55.393 06:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:55.393 06:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:55.393 06:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:55.393 06:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:55.393 06:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:55.393 06:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:55.393 06:21:39 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:55.393 06:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:55.393 06:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:55.393 06:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:55.393 06:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:55.393 06:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:55.393 06:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.R75rz3uUr1 00:11:55.393 06:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65868 00:11:55.393 06:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65868 00:11:55.393 06:21:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:55.393 06:21:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 65868 ']' 00:11:55.393 06:21:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:55.393 06:21:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:55.393 06:21:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:55.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:55.393 06:21:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:55.393 06:21:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.393 [2024-11-26 06:21:39.458402] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:11:55.393 [2024-11-26 06:21:39.458669] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65868 ] 00:11:55.652 [2024-11-26 06:21:39.622853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:55.652 [2024-11-26 06:21:39.765580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.911 [2024-11-26 06:21:40.012850] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:55.911 [2024-11-26 06:21:40.013045] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:56.479 06:21:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:56.479 06:21:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:56.479 06:21:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:56.479 06:21:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:56.479 06:21:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.479 06:21:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.479 BaseBdev1_malloc 00:11:56.479 06:21:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.479 06:21:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:11:56.479 06:21:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.479 06:21:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.479 true 00:11:56.479 06:21:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.479 06:21:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:56.479 06:21:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.479 06:21:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.479 [2024-11-26 06:21:40.431163] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:56.479 [2024-11-26 06:21:40.431228] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:56.479 [2024-11-26 06:21:40.431251] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:56.479 [2024-11-26 06:21:40.431264] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:56.479 [2024-11-26 06:21:40.433948] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:56.480 [2024-11-26 06:21:40.433992] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:56.480 BaseBdev1 00:11:56.480 06:21:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.480 06:21:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:56.480 06:21:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:56.480 06:21:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.480 06:21:40 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:56.480 BaseBdev2_malloc 00:11:56.480 06:21:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.480 06:21:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:56.480 06:21:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.480 06:21:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.480 true 00:11:56.480 06:21:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.480 06:21:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:56.480 06:21:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.480 06:21:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.480 [2024-11-26 06:21:40.509387] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:56.480 [2024-11-26 06:21:40.509475] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:56.480 [2024-11-26 06:21:40.509501] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:56.480 [2024-11-26 06:21:40.509514] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:56.480 [2024-11-26 06:21:40.512380] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:56.480 [2024-11-26 06:21:40.512500] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:56.480 BaseBdev2 00:11:56.480 06:21:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.480 06:21:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:56.480 06:21:40 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:56.480 06:21:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.480 06:21:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.480 BaseBdev3_malloc 00:11:56.480 06:21:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.480 06:21:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:56.480 06:21:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.480 06:21:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.480 true 00:11:56.480 06:21:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.480 06:21:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:56.480 06:21:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.480 06:21:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.480 [2024-11-26 06:21:40.594820] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:56.480 [2024-11-26 06:21:40.594883] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:56.480 [2024-11-26 06:21:40.594903] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:56.480 [2024-11-26 06:21:40.594914] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:56.480 [2024-11-26 06:21:40.597413] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:56.480 [2024-11-26 06:21:40.597448] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:11:56.480 BaseBdev3 00:11:56.480 06:21:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.480 06:21:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:56.480 06:21:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.480 06:21:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.480 [2024-11-26 06:21:40.606962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:56.480 [2024-11-26 06:21:40.609290] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:56.480 [2024-11-26 06:21:40.609386] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:56.480 [2024-11-26 06:21:40.609623] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:56.480 [2024-11-26 06:21:40.609638] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:56.480 [2024-11-26 06:21:40.609965] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:11:56.480 [2024-11-26 06:21:40.610183] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:56.480 [2024-11-26 06:21:40.610201] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:56.480 [2024-11-26 06:21:40.610427] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:56.738 06:21:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.738 06:21:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:11:56.738 06:21:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:11:56.738 06:21:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:56.738 06:21:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:56.738 06:21:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:56.738 06:21:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:56.738 06:21:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.738 06:21:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.738 06:21:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.738 06:21:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.738 06:21:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.738 06:21:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:56.738 06:21:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.738 06:21:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.738 06:21:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.738 06:21:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.738 "name": "raid_bdev1", 00:11:56.738 "uuid": "e9085e9f-9178-403c-a7e8-6391b9935405", 00:11:56.738 "strip_size_kb": 64, 00:11:56.738 "state": "online", 00:11:56.738 "raid_level": "raid0", 00:11:56.738 "superblock": true, 00:11:56.738 "num_base_bdevs": 3, 00:11:56.738 "num_base_bdevs_discovered": 3, 00:11:56.738 "num_base_bdevs_operational": 3, 00:11:56.738 "base_bdevs_list": [ 00:11:56.738 { 00:11:56.738 "name": "BaseBdev1", 
00:11:56.738 "uuid": "445cf7a2-adf6-5b39-a201-e49731be6edb", 00:11:56.738 "is_configured": true, 00:11:56.738 "data_offset": 2048, 00:11:56.738 "data_size": 63488 00:11:56.738 }, 00:11:56.738 { 00:11:56.738 "name": "BaseBdev2", 00:11:56.738 "uuid": "1d9bd4bb-856e-5c3d-bc49-d3906683e737", 00:11:56.738 "is_configured": true, 00:11:56.738 "data_offset": 2048, 00:11:56.738 "data_size": 63488 00:11:56.738 }, 00:11:56.738 { 00:11:56.738 "name": "BaseBdev3", 00:11:56.738 "uuid": "1f0568d6-b091-5ebe-befa-6848683e46c9", 00:11:56.738 "is_configured": true, 00:11:56.738 "data_offset": 2048, 00:11:56.738 "data_size": 63488 00:11:56.738 } 00:11:56.738 ] 00:11:56.738 }' 00:11:56.738 06:21:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.738 06:21:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.997 06:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:56.997 06:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:57.256 [2024-11-26 06:21:41.167753] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:11:58.193 06:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:58.193 06:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.193 06:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.193 06:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.193 06:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:58.193 06:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:58.193 06:21:42 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:11:58.193 06:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:11:58.193 06:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:58.193 06:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:58.193 06:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:58.193 06:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:58.193 06:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:58.193 06:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.193 06:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.193 06:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.193 06:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.193 06:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.193 06:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.193 06:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:58.193 06:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.193 06:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.193 06:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.193 "name": "raid_bdev1", 00:11:58.193 "uuid": "e9085e9f-9178-403c-a7e8-6391b9935405", 00:11:58.193 "strip_size_kb": 64, 00:11:58.194 "state": "online", 00:11:58.194 
"raid_level": "raid0", 00:11:58.194 "superblock": true, 00:11:58.194 "num_base_bdevs": 3, 00:11:58.194 "num_base_bdevs_discovered": 3, 00:11:58.194 "num_base_bdevs_operational": 3, 00:11:58.194 "base_bdevs_list": [ 00:11:58.194 { 00:11:58.194 "name": "BaseBdev1", 00:11:58.194 "uuid": "445cf7a2-adf6-5b39-a201-e49731be6edb", 00:11:58.194 "is_configured": true, 00:11:58.194 "data_offset": 2048, 00:11:58.194 "data_size": 63488 00:11:58.194 }, 00:11:58.194 { 00:11:58.194 "name": "BaseBdev2", 00:11:58.194 "uuid": "1d9bd4bb-856e-5c3d-bc49-d3906683e737", 00:11:58.194 "is_configured": true, 00:11:58.194 "data_offset": 2048, 00:11:58.194 "data_size": 63488 00:11:58.194 }, 00:11:58.194 { 00:11:58.194 "name": "BaseBdev3", 00:11:58.194 "uuid": "1f0568d6-b091-5ebe-befa-6848683e46c9", 00:11:58.194 "is_configured": true, 00:11:58.194 "data_offset": 2048, 00:11:58.194 "data_size": 63488 00:11:58.194 } 00:11:58.194 ] 00:11:58.194 }' 00:11:58.194 06:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.194 06:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.452 06:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:58.452 06:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.452 06:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.452 [2024-11-26 06:21:42.550037] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:58.452 [2024-11-26 06:21:42.550087] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:58.452 [2024-11-26 06:21:42.552860] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:58.452 [2024-11-26 06:21:42.552962] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:58.452 [2024-11-26 06:21:42.553025] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:58.452 [2024-11-26 06:21:42.553036] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:58.453 { 00:11:58.453 "results": [ 00:11:58.453 { 00:11:58.453 "job": "raid_bdev1", 00:11:58.453 "core_mask": "0x1", 00:11:58.453 "workload": "randrw", 00:11:58.453 "percentage": 50, 00:11:58.453 "status": "finished", 00:11:58.453 "queue_depth": 1, 00:11:58.453 "io_size": 131072, 00:11:58.453 "runtime": 1.382144, 00:11:58.453 "iops": 12386.553065382479, 00:11:58.453 "mibps": 1548.3191331728099, 00:11:58.453 "io_failed": 1, 00:11:58.453 "io_timeout": 0, 00:11:58.453 "avg_latency_us": 113.6551953230908, 00:11:58.453 "min_latency_us": 22.358078602620086, 00:11:58.453 "max_latency_us": 1652.709170305677 00:11:58.453 } 00:11:58.453 ], 00:11:58.453 "core_count": 1 00:11:58.453 } 00:11:58.453 06:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.453 06:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65868 00:11:58.453 06:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 65868 ']' 00:11:58.453 06:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 65868 00:11:58.453 06:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:11:58.453 06:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:58.453 06:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65868 00:11:58.711 06:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:58.711 06:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:58.711 06:21:42 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 65868' 00:11:58.711 killing process with pid 65868 00:11:58.711 06:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 65868 00:11:58.711 [2024-11-26 06:21:42.598501] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:58.711 06:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 65868 00:11:58.970 [2024-11-26 06:21:42.854683] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:00.347 06:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.R75rz3uUr1 00:12:00.347 06:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:00.347 06:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:00.347 06:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:12:00.347 06:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:12:00.347 06:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:00.347 06:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:00.347 06:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:12:00.347 00:12:00.347 real 0m4.889s 00:12:00.347 user 0m5.708s 00:12:00.347 sys 0m0.729s 00:12:00.347 ************************************ 00:12:00.347 END TEST raid_write_error_test 00:12:00.347 ************************************ 00:12:00.347 06:21:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:00.347 06:21:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.347 06:21:44 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:00.347 06:21:44 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:12:00.347 06:21:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:00.347 06:21:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:00.347 06:21:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:00.347 ************************************ 00:12:00.347 START TEST raid_state_function_test 00:12:00.347 ************************************ 00:12:00.347 06:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:12:00.347 06:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:12:00.347 06:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:12:00.347 06:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:00.347 06:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:00.347 06:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:00.347 06:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:00.347 06:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:00.347 06:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:00.347 06:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:00.347 06:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:00.347 06:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:00.347 06:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:00.347 06:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:00.347 06:21:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:00.347 06:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:00.347 06:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:00.347 06:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:00.347 06:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:00.347 06:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:00.347 06:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:00.347 06:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:00.347 06:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:12:00.347 06:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:00.347 06:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:00.347 06:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:00.347 06:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:00.347 06:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=66012 00:12:00.347 06:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:00.347 Process raid pid: 66012 00:12:00.347 06:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66012' 00:12:00.347 06:21:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 66012 00:12:00.347 06:21:44 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 66012 ']' 00:12:00.347 06:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:00.347 06:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:00.347 06:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:00.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:00.347 06:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:00.347 06:21:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.347 [2024-11-26 06:21:44.419422] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:12:00.347 [2024-11-26 06:21:44.419698] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:00.605 [2024-11-26 06:21:44.603971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:00.863 [2024-11-26 06:21:44.752704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:01.121 [2024-11-26 06:21:45.002782] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:01.121 [2024-11-26 06:21:45.002838] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:01.380 06:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:01.380 06:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:12:01.380 06:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:01.380 06:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.380 06:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.380 [2024-11-26 06:21:45.295515] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:01.380 [2024-11-26 06:21:45.295578] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:01.380 [2024-11-26 06:21:45.295590] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:01.380 [2024-11-26 06:21:45.295600] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:01.380 [2024-11-26 06:21:45.295608] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:01.380 [2024-11-26 06:21:45.295617] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:01.380 06:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.380 06:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:01.380 06:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:01.380 06:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:01.380 06:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:01.380 06:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:01.380 06:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:01.380 06:21:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.380 06:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.380 06:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.380 06:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.380 06:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:01.380 06:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.380 06:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.380 06:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.380 06:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.380 06:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.380 "name": "Existed_Raid", 00:12:01.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.380 "strip_size_kb": 64, 00:12:01.380 "state": "configuring", 00:12:01.380 "raid_level": "concat", 00:12:01.380 "superblock": false, 00:12:01.380 "num_base_bdevs": 3, 00:12:01.380 "num_base_bdevs_discovered": 0, 00:12:01.380 "num_base_bdevs_operational": 3, 00:12:01.380 "base_bdevs_list": [ 00:12:01.380 { 00:12:01.380 "name": "BaseBdev1", 00:12:01.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.380 "is_configured": false, 00:12:01.380 "data_offset": 0, 00:12:01.380 "data_size": 0 00:12:01.380 }, 00:12:01.380 { 00:12:01.380 "name": "BaseBdev2", 00:12:01.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.380 "is_configured": false, 00:12:01.380 "data_offset": 0, 00:12:01.380 "data_size": 0 00:12:01.380 }, 00:12:01.380 { 00:12:01.380 "name": "BaseBdev3", 00:12:01.380 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:01.380 "is_configured": false, 00:12:01.380 "data_offset": 0, 00:12:01.380 "data_size": 0 00:12:01.380 } 00:12:01.380 ] 00:12:01.380 }' 00:12:01.380 06:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.380 06:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.639 06:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:01.639 06:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.639 06:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.639 [2024-11-26 06:21:45.718770] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:01.639 [2024-11-26 06:21:45.718888] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:01.639 06:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.639 06:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:01.639 06:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.639 06:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.639 [2024-11-26 06:21:45.726741] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:01.639 [2024-11-26 06:21:45.726801] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:01.639 [2024-11-26 06:21:45.726812] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:01.639 [2024-11-26 06:21:45.726823] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:12:01.639 [2024-11-26 06:21:45.726830] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:01.639 [2024-11-26 06:21:45.726842] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:01.639 06:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.639 06:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:01.639 06:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.639 06:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.898 [2024-11-26 06:21:45.785844] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:01.898 BaseBdev1 00:12:01.898 06:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.898 06:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:01.898 06:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:01.898 06:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:01.898 06:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:01.898 06:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:01.898 06:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:01.898 06:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:01.898 06:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.898 06:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:12:01.898 06:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.898 06:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:01.898 06:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.898 06:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.898 [ 00:12:01.898 { 00:12:01.898 "name": "BaseBdev1", 00:12:01.898 "aliases": [ 00:12:01.898 "e79e7120-4216-40a9-a82d-2412bc7d4166" 00:12:01.898 ], 00:12:01.898 "product_name": "Malloc disk", 00:12:01.898 "block_size": 512, 00:12:01.898 "num_blocks": 65536, 00:12:01.898 "uuid": "e79e7120-4216-40a9-a82d-2412bc7d4166", 00:12:01.898 "assigned_rate_limits": { 00:12:01.898 "rw_ios_per_sec": 0, 00:12:01.898 "rw_mbytes_per_sec": 0, 00:12:01.898 "r_mbytes_per_sec": 0, 00:12:01.898 "w_mbytes_per_sec": 0 00:12:01.898 }, 00:12:01.898 "claimed": true, 00:12:01.898 "claim_type": "exclusive_write", 00:12:01.898 "zoned": false, 00:12:01.898 "supported_io_types": { 00:12:01.898 "read": true, 00:12:01.898 "write": true, 00:12:01.898 "unmap": true, 00:12:01.898 "flush": true, 00:12:01.898 "reset": true, 00:12:01.898 "nvme_admin": false, 00:12:01.898 "nvme_io": false, 00:12:01.898 "nvme_io_md": false, 00:12:01.898 "write_zeroes": true, 00:12:01.899 "zcopy": true, 00:12:01.899 "get_zone_info": false, 00:12:01.899 "zone_management": false, 00:12:01.899 "zone_append": false, 00:12:01.899 "compare": false, 00:12:01.899 "compare_and_write": false, 00:12:01.899 "abort": true, 00:12:01.899 "seek_hole": false, 00:12:01.899 "seek_data": false, 00:12:01.899 "copy": true, 00:12:01.899 "nvme_iov_md": false 00:12:01.899 }, 00:12:01.899 "memory_domains": [ 00:12:01.899 { 00:12:01.899 "dma_device_id": "system", 00:12:01.899 "dma_device_type": 1 00:12:01.899 }, 00:12:01.899 { 00:12:01.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:12:01.899 "dma_device_type": 2 00:12:01.899 } 00:12:01.899 ], 00:12:01.899 "driver_specific": {} 00:12:01.899 } 00:12:01.899 ] 00:12:01.899 06:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.899 06:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:01.899 06:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:01.899 06:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:01.899 06:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:01.899 06:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:01.899 06:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:01.899 06:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:01.899 06:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.899 06:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.899 06:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.899 06:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.899 06:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.899 06:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:01.899 06:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.899 06:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.899 06:21:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.899 06:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.899 "name": "Existed_Raid", 00:12:01.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.899 "strip_size_kb": 64, 00:12:01.899 "state": "configuring", 00:12:01.899 "raid_level": "concat", 00:12:01.899 "superblock": false, 00:12:01.899 "num_base_bdevs": 3, 00:12:01.899 "num_base_bdevs_discovered": 1, 00:12:01.899 "num_base_bdevs_operational": 3, 00:12:01.899 "base_bdevs_list": [ 00:12:01.899 { 00:12:01.899 "name": "BaseBdev1", 00:12:01.899 "uuid": "e79e7120-4216-40a9-a82d-2412bc7d4166", 00:12:01.899 "is_configured": true, 00:12:01.899 "data_offset": 0, 00:12:01.899 "data_size": 65536 00:12:01.899 }, 00:12:01.899 { 00:12:01.899 "name": "BaseBdev2", 00:12:01.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.899 "is_configured": false, 00:12:01.899 "data_offset": 0, 00:12:01.899 "data_size": 0 00:12:01.899 }, 00:12:01.899 { 00:12:01.899 "name": "BaseBdev3", 00:12:01.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.899 "is_configured": false, 00:12:01.899 "data_offset": 0, 00:12:01.899 "data_size": 0 00:12:01.899 } 00:12:01.899 ] 00:12:01.899 }' 00:12:01.899 06:21:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.899 06:21:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.158 06:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:02.158 06:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.158 06:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.417 [2024-11-26 06:21:46.293065] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:02.417 [2024-11-26 06:21:46.293228] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:02.417 06:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.417 06:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:02.417 06:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.417 06:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.417 [2024-11-26 06:21:46.305136] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:02.417 [2024-11-26 06:21:46.307390] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:02.417 [2024-11-26 06:21:46.307502] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:02.417 [2024-11-26 06:21:46.307542] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:02.417 [2024-11-26 06:21:46.307574] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:02.417 06:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.417 06:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:02.417 06:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:02.417 06:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:02.417 06:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:02.417 06:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:02.417 06:21:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:02.417 06:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:02.417 06:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:02.417 06:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.417 06:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.417 06:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.417 06:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.417 06:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.417 06:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:02.417 06:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.417 06:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.417 06:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.417 06:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.417 "name": "Existed_Raid", 00:12:02.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.417 "strip_size_kb": 64, 00:12:02.417 "state": "configuring", 00:12:02.417 "raid_level": "concat", 00:12:02.417 "superblock": false, 00:12:02.417 "num_base_bdevs": 3, 00:12:02.417 "num_base_bdevs_discovered": 1, 00:12:02.417 "num_base_bdevs_operational": 3, 00:12:02.417 "base_bdevs_list": [ 00:12:02.417 { 00:12:02.417 "name": "BaseBdev1", 00:12:02.417 "uuid": "e79e7120-4216-40a9-a82d-2412bc7d4166", 00:12:02.417 "is_configured": true, 00:12:02.417 "data_offset": 
0, 00:12:02.417 "data_size": 65536 00:12:02.417 }, 00:12:02.417 { 00:12:02.417 "name": "BaseBdev2", 00:12:02.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.417 "is_configured": false, 00:12:02.417 "data_offset": 0, 00:12:02.417 "data_size": 0 00:12:02.417 }, 00:12:02.417 { 00:12:02.417 "name": "BaseBdev3", 00:12:02.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.417 "is_configured": false, 00:12:02.417 "data_offset": 0, 00:12:02.417 "data_size": 0 00:12:02.417 } 00:12:02.417 ] 00:12:02.417 }' 00:12:02.417 06:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.417 06:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.676 06:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:02.676 06:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.676 06:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.935 [2024-11-26 06:21:46.839879] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:02.935 BaseBdev2 00:12:02.935 06:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.935 06:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:02.935 06:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:02.935 06:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:02.935 06:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:02.935 06:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:02.935 06:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:12:02.935 06:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:02.935 06:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.936 06:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.936 06:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.936 06:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:02.936 06:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.936 06:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.936 [ 00:12:02.936 { 00:12:02.936 "name": "BaseBdev2", 00:12:02.936 "aliases": [ 00:12:02.936 "87a0d534-bbbe-4f36-9229-1f243621138f" 00:12:02.936 ], 00:12:02.936 "product_name": "Malloc disk", 00:12:02.936 "block_size": 512, 00:12:02.936 "num_blocks": 65536, 00:12:02.936 "uuid": "87a0d534-bbbe-4f36-9229-1f243621138f", 00:12:02.936 "assigned_rate_limits": { 00:12:02.936 "rw_ios_per_sec": 0, 00:12:02.936 "rw_mbytes_per_sec": 0, 00:12:02.936 "r_mbytes_per_sec": 0, 00:12:02.936 "w_mbytes_per_sec": 0 00:12:02.936 }, 00:12:02.936 "claimed": true, 00:12:02.936 "claim_type": "exclusive_write", 00:12:02.936 "zoned": false, 00:12:02.936 "supported_io_types": { 00:12:02.936 "read": true, 00:12:02.936 "write": true, 00:12:02.936 "unmap": true, 00:12:02.936 "flush": true, 00:12:02.936 "reset": true, 00:12:02.936 "nvme_admin": false, 00:12:02.936 "nvme_io": false, 00:12:02.936 "nvme_io_md": false, 00:12:02.936 "write_zeroes": true, 00:12:02.936 "zcopy": true, 00:12:02.936 "get_zone_info": false, 00:12:02.936 "zone_management": false, 00:12:02.936 "zone_append": false, 00:12:02.936 "compare": false, 00:12:02.936 "compare_and_write": false, 00:12:02.936 "abort": true, 00:12:02.936 "seek_hole": 
false, 00:12:02.936 "seek_data": false, 00:12:02.936 "copy": true, 00:12:02.936 "nvme_iov_md": false 00:12:02.936 }, 00:12:02.936 "memory_domains": [ 00:12:02.936 { 00:12:02.936 "dma_device_id": "system", 00:12:02.936 "dma_device_type": 1 00:12:02.936 }, 00:12:02.936 { 00:12:02.936 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.936 "dma_device_type": 2 00:12:02.936 } 00:12:02.936 ], 00:12:02.936 "driver_specific": {} 00:12:02.936 } 00:12:02.936 ] 00:12:02.936 06:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.936 06:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:02.936 06:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:02.936 06:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:02.936 06:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:02.936 06:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:02.936 06:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:02.936 06:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:02.936 06:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:02.936 06:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:02.936 06:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.936 06:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.936 06:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.936 06:21:46 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.936 06:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.936 06:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:02.936 06:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.936 06:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.936 06:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.936 06:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.936 "name": "Existed_Raid", 00:12:02.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.936 "strip_size_kb": 64, 00:12:02.936 "state": "configuring", 00:12:02.936 "raid_level": "concat", 00:12:02.936 "superblock": false, 00:12:02.936 "num_base_bdevs": 3, 00:12:02.936 "num_base_bdevs_discovered": 2, 00:12:02.936 "num_base_bdevs_operational": 3, 00:12:02.936 "base_bdevs_list": [ 00:12:02.936 { 00:12:02.936 "name": "BaseBdev1", 00:12:02.936 "uuid": "e79e7120-4216-40a9-a82d-2412bc7d4166", 00:12:02.936 "is_configured": true, 00:12:02.936 "data_offset": 0, 00:12:02.936 "data_size": 65536 00:12:02.936 }, 00:12:02.936 { 00:12:02.936 "name": "BaseBdev2", 00:12:02.936 "uuid": "87a0d534-bbbe-4f36-9229-1f243621138f", 00:12:02.936 "is_configured": true, 00:12:02.936 "data_offset": 0, 00:12:02.936 "data_size": 65536 00:12:02.936 }, 00:12:02.936 { 00:12:02.936 "name": "BaseBdev3", 00:12:02.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.936 "is_configured": false, 00:12:02.936 "data_offset": 0, 00:12:02.936 "data_size": 0 00:12:02.936 } 00:12:02.936 ] 00:12:02.936 }' 00:12:02.936 06:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.936 06:21:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:03.502 06:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:03.503 06:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.503 06:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.503 [2024-11-26 06:21:47.387458] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:03.503 [2024-11-26 06:21:47.387638] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:03.503 [2024-11-26 06:21:47.387678] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:12:03.503 [2024-11-26 06:21:47.388115] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:03.503 [2024-11-26 06:21:47.388383] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:03.503 [2024-11-26 06:21:47.388430] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:03.503 [2024-11-26 06:21:47.388813] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:03.503 BaseBdev3 00:12:03.503 06:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.503 06:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:03.503 06:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:03.503 06:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:03.503 06:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:03.503 06:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:03.503 06:21:47 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:03.503 06:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:03.503 06:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.503 06:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.503 06:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.503 06:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:03.503 06:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.503 06:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.503 [ 00:12:03.503 { 00:12:03.503 "name": "BaseBdev3", 00:12:03.503 "aliases": [ 00:12:03.503 "583488c8-4e32-4f96-8dfc-286545cf6ca3" 00:12:03.503 ], 00:12:03.503 "product_name": "Malloc disk", 00:12:03.503 "block_size": 512, 00:12:03.503 "num_blocks": 65536, 00:12:03.503 "uuid": "583488c8-4e32-4f96-8dfc-286545cf6ca3", 00:12:03.503 "assigned_rate_limits": { 00:12:03.503 "rw_ios_per_sec": 0, 00:12:03.503 "rw_mbytes_per_sec": 0, 00:12:03.503 "r_mbytes_per_sec": 0, 00:12:03.503 "w_mbytes_per_sec": 0 00:12:03.503 }, 00:12:03.503 "claimed": true, 00:12:03.503 "claim_type": "exclusive_write", 00:12:03.503 "zoned": false, 00:12:03.503 "supported_io_types": { 00:12:03.503 "read": true, 00:12:03.503 "write": true, 00:12:03.503 "unmap": true, 00:12:03.503 "flush": true, 00:12:03.503 "reset": true, 00:12:03.503 "nvme_admin": false, 00:12:03.503 "nvme_io": false, 00:12:03.503 "nvme_io_md": false, 00:12:03.503 "write_zeroes": true, 00:12:03.503 "zcopy": true, 00:12:03.503 "get_zone_info": false, 00:12:03.503 "zone_management": false, 00:12:03.503 "zone_append": false, 00:12:03.503 "compare": false, 
00:12:03.503 "compare_and_write": false, 00:12:03.503 "abort": true, 00:12:03.503 "seek_hole": false, 00:12:03.503 "seek_data": false, 00:12:03.503 "copy": true, 00:12:03.503 "nvme_iov_md": false 00:12:03.503 }, 00:12:03.503 "memory_domains": [ 00:12:03.503 { 00:12:03.503 "dma_device_id": "system", 00:12:03.503 "dma_device_type": 1 00:12:03.503 }, 00:12:03.503 { 00:12:03.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.503 "dma_device_type": 2 00:12:03.503 } 00:12:03.503 ], 00:12:03.503 "driver_specific": {} 00:12:03.503 } 00:12:03.503 ] 00:12:03.503 06:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.503 06:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:03.503 06:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:03.503 06:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:03.503 06:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:12:03.503 06:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:03.503 06:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:03.503 06:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:03.503 06:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:03.503 06:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:03.503 06:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.503 06:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.503 06:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:03.503 06:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.503 06:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:03.503 06:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.503 06:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.503 06:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.503 06:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.503 06:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.503 "name": "Existed_Raid", 00:12:03.503 "uuid": "24977344-c21e-44c1-b37d-d7e12a8d7613", 00:12:03.503 "strip_size_kb": 64, 00:12:03.503 "state": "online", 00:12:03.503 "raid_level": "concat", 00:12:03.503 "superblock": false, 00:12:03.503 "num_base_bdevs": 3, 00:12:03.503 "num_base_bdevs_discovered": 3, 00:12:03.503 "num_base_bdevs_operational": 3, 00:12:03.503 "base_bdevs_list": [ 00:12:03.503 { 00:12:03.503 "name": "BaseBdev1", 00:12:03.503 "uuid": "e79e7120-4216-40a9-a82d-2412bc7d4166", 00:12:03.503 "is_configured": true, 00:12:03.503 "data_offset": 0, 00:12:03.503 "data_size": 65536 00:12:03.503 }, 00:12:03.503 { 00:12:03.503 "name": "BaseBdev2", 00:12:03.503 "uuid": "87a0d534-bbbe-4f36-9229-1f243621138f", 00:12:03.503 "is_configured": true, 00:12:03.503 "data_offset": 0, 00:12:03.503 "data_size": 65536 00:12:03.503 }, 00:12:03.503 { 00:12:03.503 "name": "BaseBdev3", 00:12:03.503 "uuid": "583488c8-4e32-4f96-8dfc-286545cf6ca3", 00:12:03.503 "is_configured": true, 00:12:03.503 "data_offset": 0, 00:12:03.503 "data_size": 65536 00:12:03.503 } 00:12:03.503 ] 00:12:03.503 }' 00:12:03.503 06:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:12:03.503 06:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.762 06:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:03.762 06:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:03.762 06:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:03.762 06:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:03.762 06:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:03.762 06:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:03.762 06:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:03.762 06:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:03.762 06:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.762 06:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.762 [2024-11-26 06:21:47.811159] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:03.762 06:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.762 06:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:03.762 "name": "Existed_Raid", 00:12:03.762 "aliases": [ 00:12:03.762 "24977344-c21e-44c1-b37d-d7e12a8d7613" 00:12:03.762 ], 00:12:03.762 "product_name": "Raid Volume", 00:12:03.762 "block_size": 512, 00:12:03.762 "num_blocks": 196608, 00:12:03.762 "uuid": "24977344-c21e-44c1-b37d-d7e12a8d7613", 00:12:03.762 "assigned_rate_limits": { 00:12:03.762 "rw_ios_per_sec": 0, 00:12:03.762 "rw_mbytes_per_sec": 0, 00:12:03.762 "r_mbytes_per_sec": 
0, 00:12:03.762 "w_mbytes_per_sec": 0 00:12:03.762 }, 00:12:03.762 "claimed": false, 00:12:03.762 "zoned": false, 00:12:03.762 "supported_io_types": { 00:12:03.762 "read": true, 00:12:03.762 "write": true, 00:12:03.762 "unmap": true, 00:12:03.762 "flush": true, 00:12:03.762 "reset": true, 00:12:03.762 "nvme_admin": false, 00:12:03.762 "nvme_io": false, 00:12:03.762 "nvme_io_md": false, 00:12:03.762 "write_zeroes": true, 00:12:03.762 "zcopy": false, 00:12:03.762 "get_zone_info": false, 00:12:03.762 "zone_management": false, 00:12:03.762 "zone_append": false, 00:12:03.762 "compare": false, 00:12:03.762 "compare_and_write": false, 00:12:03.762 "abort": false, 00:12:03.762 "seek_hole": false, 00:12:03.762 "seek_data": false, 00:12:03.762 "copy": false, 00:12:03.762 "nvme_iov_md": false 00:12:03.762 }, 00:12:03.762 "memory_domains": [ 00:12:03.762 { 00:12:03.762 "dma_device_id": "system", 00:12:03.762 "dma_device_type": 1 00:12:03.762 }, 00:12:03.762 { 00:12:03.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.762 "dma_device_type": 2 00:12:03.762 }, 00:12:03.762 { 00:12:03.762 "dma_device_id": "system", 00:12:03.762 "dma_device_type": 1 00:12:03.763 }, 00:12:03.763 { 00:12:03.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.763 "dma_device_type": 2 00:12:03.763 }, 00:12:03.763 { 00:12:03.763 "dma_device_id": "system", 00:12:03.763 "dma_device_type": 1 00:12:03.763 }, 00:12:03.763 { 00:12:03.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.763 "dma_device_type": 2 00:12:03.763 } 00:12:03.763 ], 00:12:03.763 "driver_specific": { 00:12:03.763 "raid": { 00:12:03.763 "uuid": "24977344-c21e-44c1-b37d-d7e12a8d7613", 00:12:03.763 "strip_size_kb": 64, 00:12:03.763 "state": "online", 00:12:03.763 "raid_level": "concat", 00:12:03.763 "superblock": false, 00:12:03.763 "num_base_bdevs": 3, 00:12:03.763 "num_base_bdevs_discovered": 3, 00:12:03.763 "num_base_bdevs_operational": 3, 00:12:03.763 "base_bdevs_list": [ 00:12:03.763 { 00:12:03.763 "name": "BaseBdev1", 
00:12:03.763 "uuid": "e79e7120-4216-40a9-a82d-2412bc7d4166", 00:12:03.763 "is_configured": true, 00:12:03.763 "data_offset": 0, 00:12:03.763 "data_size": 65536 00:12:03.763 }, 00:12:03.763 { 00:12:03.763 "name": "BaseBdev2", 00:12:03.763 "uuid": "87a0d534-bbbe-4f36-9229-1f243621138f", 00:12:03.763 "is_configured": true, 00:12:03.763 "data_offset": 0, 00:12:03.763 "data_size": 65536 00:12:03.763 }, 00:12:03.763 { 00:12:03.763 "name": "BaseBdev3", 00:12:03.763 "uuid": "583488c8-4e32-4f96-8dfc-286545cf6ca3", 00:12:03.763 "is_configured": true, 00:12:03.763 "data_offset": 0, 00:12:03.763 "data_size": 65536 00:12:03.763 } 00:12:03.763 ] 00:12:03.763 } 00:12:03.763 } 00:12:03.763 }' 00:12:03.763 06:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:03.763 06:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:03.763 BaseBdev2 00:12:03.763 BaseBdev3' 00:12:03.763 06:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:04.024 06:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:04.024 06:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:04.024 06:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:04.024 06:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:04.024 06:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.024 06:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.024 06:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:04.024 06:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:04.024 06:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:04.024 06:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:04.024 06:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:04.024 06:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.024 06:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.024 06:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:04.024 06:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.024 06:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:04.024 06:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:04.024 06:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:04.024 06:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:04.024 06:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:04.024 06:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.024 06:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.024 06:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.024 06:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
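The property checks above all follow one pattern: extract the same four fields from the raid bdev and from each base bdev with `jq`, join them into a string, and compare. The following is a minimal standalone sketch of that comparison, assuming `jq` is installed; the inline JSON stands in for live `rpc_cmd bdev_get_bdevs` output, with values mirroring the dumps in this log (everything else about the RPC plumbing is omitted).

```shell
#!/usr/bin/env bash
# Sketch of the comparison done by verify_raid_bdev_properties: the raid
# bdev and every configured base bdev must agree on block_size, md_size,
# md_interleave and dif_type. Inline JSON stands in for the output of
# `rpc_cmd bdev_get_bdevs`; the values mirror the log above.
set -euo pipefail

raid_json='{"block_size": 512, "md_size": null, "md_interleave": null, "dif_type": null}'
base_json='{"block_size": 512, "md_size": null, "md_interleave": null, "dif_type": null}'

# Same filter as bdev_raid.sh lines 189/192 in the log.
fields='[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'

cmp_raid_bdev=$(jq -r "$fields" <<< "$raid_json")
cmp_base_bdev=$(jq -r "$fields" <<< "$base_json")

# jq's join() renders null elements as empty strings, which is why the
# log shows cmp_raid_bdev='512 ' with trailing whitespace.
if [[ "$cmp_raid_bdev" == "$cmp_base_bdev" ]]; then
    echo "properties match: '$cmp_raid_bdev'"
else
    echo "mismatch: raid='$cmp_raid_bdev' base='$cmp_base_bdev'" >&2
    exit 1
fi
```

This also explains the escaped-space pattern match in the log (`[[ 512 == \5\1\2\ \ \ ]]`): the three `md_*`/`dif_type` fields are null for plain malloc-backed bdevs, so the joined string is `512` followed by three separator spaces.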
00:12:04.024 06:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:04.024 06:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:04.024 06:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.024 06:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.024 [2024-11-26 06:21:48.110472] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:04.024 [2024-11-26 06:21:48.110566] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:04.024 [2024-11-26 06:21:48.110663] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:04.285 06:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.285 06:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:04.285 06:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:12:04.285 06:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:04.285 06:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:04.285 06:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:04.285 06:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:12:04.285 06:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:04.285 06:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:04.285 06:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:04.285 06:21:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:04.285 06:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:04.285 06:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.285 06:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.285 06:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.285 06:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.285 06:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.285 06:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:04.285 06:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.285 06:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.285 06:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.285 06:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.285 "name": "Existed_Raid", 00:12:04.285 "uuid": "24977344-c21e-44c1-b37d-d7e12a8d7613", 00:12:04.285 "strip_size_kb": 64, 00:12:04.285 "state": "offline", 00:12:04.285 "raid_level": "concat", 00:12:04.285 "superblock": false, 00:12:04.285 "num_base_bdevs": 3, 00:12:04.285 "num_base_bdevs_discovered": 2, 00:12:04.285 "num_base_bdevs_operational": 2, 00:12:04.285 "base_bdevs_list": [ 00:12:04.285 { 00:12:04.285 "name": null, 00:12:04.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.285 "is_configured": false, 00:12:04.285 "data_offset": 0, 00:12:04.285 "data_size": 65536 00:12:04.285 }, 00:12:04.285 { 00:12:04.285 "name": "BaseBdev2", 00:12:04.285 "uuid": 
"87a0d534-bbbe-4f36-9229-1f243621138f", 00:12:04.285 "is_configured": true, 00:12:04.285 "data_offset": 0, 00:12:04.285 "data_size": 65536 00:12:04.285 }, 00:12:04.285 { 00:12:04.285 "name": "BaseBdev3", 00:12:04.285 "uuid": "583488c8-4e32-4f96-8dfc-286545cf6ca3", 00:12:04.285 "is_configured": true, 00:12:04.285 "data_offset": 0, 00:12:04.285 "data_size": 65536 00:12:04.285 } 00:12:04.285 ] 00:12:04.285 }' 00:12:04.285 06:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.285 06:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.854 06:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:04.854 06:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:04.854 06:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.854 06:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:04.854 06:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.854 06:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.854 06:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.854 06:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:04.854 06:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:04.854 06:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:04.854 06:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.854 06:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.854 [2024-11-26 06:21:48.742356] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:04.854 06:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.854 06:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:04.854 06:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:04.854 06:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.854 06:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:04.854 06:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.854 06:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.854 06:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.854 06:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:04.854 06:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:04.854 06:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:04.854 06:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.854 06:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.854 [2024-11-26 06:21:48.912361] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:04.854 [2024-11-26 06:21:48.912493] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:05.113 06:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.113 06:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:05.113 06:21:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:05.113 06:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.113 06:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:05.113 06:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.113 06:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.113 06:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.113 06:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:05.113 06:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:05.113 06:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:12:05.113 06:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:05.113 06:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:05.113 06:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:05.113 06:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.113 06:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.113 BaseBdev2 00:12:05.113 06:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.113 06:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:05.113 06:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:05.113 06:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:05.113 
06:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:05.113 06:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:05.113 06:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:05.113 06:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:05.113 06:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.113 06:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.113 06:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.113 06:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:05.113 06:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.113 06:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.113 [ 00:12:05.113 { 00:12:05.113 "name": "BaseBdev2", 00:12:05.113 "aliases": [ 00:12:05.113 "e5e7a152-b54d-412f-bcd7-39425637d666" 00:12:05.113 ], 00:12:05.113 "product_name": "Malloc disk", 00:12:05.113 "block_size": 512, 00:12:05.113 "num_blocks": 65536, 00:12:05.113 "uuid": "e5e7a152-b54d-412f-bcd7-39425637d666", 00:12:05.113 "assigned_rate_limits": { 00:12:05.113 "rw_ios_per_sec": 0, 00:12:05.113 "rw_mbytes_per_sec": 0, 00:12:05.113 "r_mbytes_per_sec": 0, 00:12:05.113 "w_mbytes_per_sec": 0 00:12:05.113 }, 00:12:05.113 "claimed": false, 00:12:05.113 "zoned": false, 00:12:05.113 "supported_io_types": { 00:12:05.113 "read": true, 00:12:05.113 "write": true, 00:12:05.113 "unmap": true, 00:12:05.113 "flush": true, 00:12:05.113 "reset": true, 00:12:05.113 "nvme_admin": false, 00:12:05.113 "nvme_io": false, 00:12:05.113 "nvme_io_md": false, 00:12:05.113 "write_zeroes": true, 
00:12:05.113 "zcopy": true, 00:12:05.113 "get_zone_info": false, 00:12:05.113 "zone_management": false, 00:12:05.113 "zone_append": false, 00:12:05.113 "compare": false, 00:12:05.113 "compare_and_write": false, 00:12:05.113 "abort": true, 00:12:05.113 "seek_hole": false, 00:12:05.113 "seek_data": false, 00:12:05.113 "copy": true, 00:12:05.113 "nvme_iov_md": false 00:12:05.113 }, 00:12:05.113 "memory_domains": [ 00:12:05.113 { 00:12:05.113 "dma_device_id": "system", 00:12:05.113 "dma_device_type": 1 00:12:05.113 }, 00:12:05.113 { 00:12:05.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.113 "dma_device_type": 2 00:12:05.113 } 00:12:05.113 ], 00:12:05.113 "driver_specific": {} 00:12:05.113 } 00:12:05.113 ] 00:12:05.113 06:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.113 06:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:05.113 06:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:05.113 06:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:05.113 06:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:05.113 06:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.113 06:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.113 BaseBdev3 00:12:05.114 06:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.114 06:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:05.114 06:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:05.114 06:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:05.114 06:21:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:05.114 06:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:05.114 06:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:05.114 06:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:05.114 06:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.114 06:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.372 06:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.372 06:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:05.372 06:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.372 06:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.372 [ 00:12:05.372 { 00:12:05.372 "name": "BaseBdev3", 00:12:05.372 "aliases": [ 00:12:05.372 "f196662b-6502-4e2b-842c-2bbc13ff9fbb" 00:12:05.372 ], 00:12:05.372 "product_name": "Malloc disk", 00:12:05.372 "block_size": 512, 00:12:05.372 "num_blocks": 65536, 00:12:05.372 "uuid": "f196662b-6502-4e2b-842c-2bbc13ff9fbb", 00:12:05.372 "assigned_rate_limits": { 00:12:05.372 "rw_ios_per_sec": 0, 00:12:05.372 "rw_mbytes_per_sec": 0, 00:12:05.372 "r_mbytes_per_sec": 0, 00:12:05.372 "w_mbytes_per_sec": 0 00:12:05.372 }, 00:12:05.372 "claimed": false, 00:12:05.372 "zoned": false, 00:12:05.372 "supported_io_types": { 00:12:05.372 "read": true, 00:12:05.372 "write": true, 00:12:05.372 "unmap": true, 00:12:05.372 "flush": true, 00:12:05.372 "reset": true, 00:12:05.372 "nvme_admin": false, 00:12:05.372 "nvme_io": false, 00:12:05.372 "nvme_io_md": false, 00:12:05.372 "write_zeroes": true, 
00:12:05.372 "zcopy": true, 00:12:05.372 "get_zone_info": false, 00:12:05.372 "zone_management": false, 00:12:05.372 "zone_append": false, 00:12:05.372 "compare": false, 00:12:05.372 "compare_and_write": false, 00:12:05.372 "abort": true, 00:12:05.372 "seek_hole": false, 00:12:05.372 "seek_data": false, 00:12:05.372 "copy": true, 00:12:05.372 "nvme_iov_md": false 00:12:05.372 }, 00:12:05.372 "memory_domains": [ 00:12:05.372 { 00:12:05.372 "dma_device_id": "system", 00:12:05.372 "dma_device_type": 1 00:12:05.372 }, 00:12:05.372 { 00:12:05.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.372 "dma_device_type": 2 00:12:05.372 } 00:12:05.372 ], 00:12:05.372 "driver_specific": {} 00:12:05.372 } 00:12:05.372 ] 00:12:05.372 06:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.372 06:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:05.372 06:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:05.372 06:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:05.372 06:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:05.372 06:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.372 06:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.372 [2024-11-26 06:21:49.280837] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:05.372 [2024-11-26 06:21:49.280970] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:05.372 [2024-11-26 06:21:49.281034] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:05.372 [2024-11-26 06:21:49.283633] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:05.372 06:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.372 06:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:05.372 06:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:05.372 06:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:05.372 06:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:05.372 06:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:05.372 06:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:05.372 06:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.373 06:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.373 06:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.373 06:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.373 06:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.373 06:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:05.373 06:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.373 06:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.373 06:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.373 06:21:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.373 "name": "Existed_Raid", 00:12:05.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.373 "strip_size_kb": 64, 00:12:05.373 "state": "configuring", 00:12:05.373 "raid_level": "concat", 00:12:05.373 "superblock": false, 00:12:05.373 "num_base_bdevs": 3, 00:12:05.373 "num_base_bdevs_discovered": 2, 00:12:05.373 "num_base_bdevs_operational": 3, 00:12:05.373 "base_bdevs_list": [ 00:12:05.373 { 00:12:05.373 "name": "BaseBdev1", 00:12:05.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.373 "is_configured": false, 00:12:05.373 "data_offset": 0, 00:12:05.373 "data_size": 0 00:12:05.373 }, 00:12:05.373 { 00:12:05.373 "name": "BaseBdev2", 00:12:05.373 "uuid": "e5e7a152-b54d-412f-bcd7-39425637d666", 00:12:05.373 "is_configured": true, 00:12:05.373 "data_offset": 0, 00:12:05.373 "data_size": 65536 00:12:05.373 }, 00:12:05.373 { 00:12:05.373 "name": "BaseBdev3", 00:12:05.373 "uuid": "f196662b-6502-4e2b-842c-2bbc13ff9fbb", 00:12:05.373 "is_configured": true, 00:12:05.373 "data_offset": 0, 00:12:05.373 "data_size": 65536 00:12:05.373 } 00:12:05.373 ] 00:12:05.373 }' 00:12:05.373 06:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.373 06:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.632 06:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:05.632 06:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.632 06:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.632 [2024-11-26 06:21:49.736023] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:05.632 06:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.633 06:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:05.633 06:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:05.633 06:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:05.633 06:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:05.633 06:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:05.633 06:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:05.633 06:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.633 06:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.633 06:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.633 06:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.633 06:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.633 06:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:05.633 06:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.633 06:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.892 06:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.892 06:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.892 "name": "Existed_Raid", 00:12:05.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.892 "strip_size_kb": 64, 00:12:05.892 "state": "configuring", 00:12:05.892 "raid_level": "concat", 00:12:05.892 "superblock": false, 
00:12:05.892 "num_base_bdevs": 3, 00:12:05.892 "num_base_bdevs_discovered": 1, 00:12:05.892 "num_base_bdevs_operational": 3, 00:12:05.892 "base_bdevs_list": [ 00:12:05.892 { 00:12:05.892 "name": "BaseBdev1", 00:12:05.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.892 "is_configured": false, 00:12:05.892 "data_offset": 0, 00:12:05.892 "data_size": 0 00:12:05.892 }, 00:12:05.892 { 00:12:05.892 "name": null, 00:12:05.892 "uuid": "e5e7a152-b54d-412f-bcd7-39425637d666", 00:12:05.892 "is_configured": false, 00:12:05.892 "data_offset": 0, 00:12:05.892 "data_size": 65536 00:12:05.892 }, 00:12:05.892 { 00:12:05.892 "name": "BaseBdev3", 00:12:05.892 "uuid": "f196662b-6502-4e2b-842c-2bbc13ff9fbb", 00:12:05.892 "is_configured": true, 00:12:05.892 "data_offset": 0, 00:12:05.892 "data_size": 65536 00:12:05.892 } 00:12:05.892 ] 00:12:05.892 }' 00:12:05.892 06:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.892 06:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.150 06:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.151 06:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.151 06:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.151 06:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:06.151 06:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.151 06:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:06.151 06:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:06.151 06:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.151 
06:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.411 [2024-11-26 06:21:50.304177] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:06.411 BaseBdev1 00:12:06.411 06:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.411 06:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:06.411 06:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:06.411 06:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:06.411 06:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:06.411 06:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:06.411 06:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:06.411 06:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:06.411 06:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.411 06:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.411 06:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.411 06:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:06.411 06:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.411 06:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.411 [ 00:12:06.411 { 00:12:06.411 "name": "BaseBdev1", 00:12:06.411 "aliases": [ 00:12:06.411 "b8b35e6f-597b-4be7-8994-cee0381a990c" 00:12:06.411 ], 00:12:06.411 "product_name": 
"Malloc disk", 00:12:06.411 "block_size": 512, 00:12:06.411 "num_blocks": 65536, 00:12:06.411 "uuid": "b8b35e6f-597b-4be7-8994-cee0381a990c", 00:12:06.411 "assigned_rate_limits": { 00:12:06.412 "rw_ios_per_sec": 0, 00:12:06.412 "rw_mbytes_per_sec": 0, 00:12:06.412 "r_mbytes_per_sec": 0, 00:12:06.412 "w_mbytes_per_sec": 0 00:12:06.412 }, 00:12:06.412 "claimed": true, 00:12:06.412 "claim_type": "exclusive_write", 00:12:06.412 "zoned": false, 00:12:06.412 "supported_io_types": { 00:12:06.412 "read": true, 00:12:06.412 "write": true, 00:12:06.412 "unmap": true, 00:12:06.412 "flush": true, 00:12:06.412 "reset": true, 00:12:06.412 "nvme_admin": false, 00:12:06.412 "nvme_io": false, 00:12:06.412 "nvme_io_md": false, 00:12:06.412 "write_zeroes": true, 00:12:06.412 "zcopy": true, 00:12:06.412 "get_zone_info": false, 00:12:06.412 "zone_management": false, 00:12:06.412 "zone_append": false, 00:12:06.412 "compare": false, 00:12:06.412 "compare_and_write": false, 00:12:06.412 "abort": true, 00:12:06.412 "seek_hole": false, 00:12:06.412 "seek_data": false, 00:12:06.412 "copy": true, 00:12:06.412 "nvme_iov_md": false 00:12:06.412 }, 00:12:06.412 "memory_domains": [ 00:12:06.412 { 00:12:06.412 "dma_device_id": "system", 00:12:06.412 "dma_device_type": 1 00:12:06.412 }, 00:12:06.412 { 00:12:06.412 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.412 "dma_device_type": 2 00:12:06.412 } 00:12:06.412 ], 00:12:06.412 "driver_specific": {} 00:12:06.412 } 00:12:06.412 ] 00:12:06.412 06:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.412 06:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:06.412 06:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:06.412 06:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:06.412 06:21:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:06.412 06:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:06.412 06:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:06.412 06:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:06.412 06:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:06.412 06:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:06.412 06:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:06.412 06:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:06.412 06:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.412 06:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:06.412 06:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.412 06:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.412 06:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.412 06:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:06.412 "name": "Existed_Raid", 00:12:06.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.412 "strip_size_kb": 64, 00:12:06.412 "state": "configuring", 00:12:06.412 "raid_level": "concat", 00:12:06.412 "superblock": false, 00:12:06.412 "num_base_bdevs": 3, 00:12:06.412 "num_base_bdevs_discovered": 2, 00:12:06.412 "num_base_bdevs_operational": 3, 00:12:06.412 "base_bdevs_list": [ 00:12:06.412 { 00:12:06.412 "name": "BaseBdev1", 
00:12:06.412 "uuid": "b8b35e6f-597b-4be7-8994-cee0381a990c", 00:12:06.412 "is_configured": true, 00:12:06.412 "data_offset": 0, 00:12:06.412 "data_size": 65536 00:12:06.412 }, 00:12:06.412 { 00:12:06.412 "name": null, 00:12:06.412 "uuid": "e5e7a152-b54d-412f-bcd7-39425637d666", 00:12:06.412 "is_configured": false, 00:12:06.412 "data_offset": 0, 00:12:06.412 "data_size": 65536 00:12:06.412 }, 00:12:06.412 { 00:12:06.412 "name": "BaseBdev3", 00:12:06.412 "uuid": "f196662b-6502-4e2b-842c-2bbc13ff9fbb", 00:12:06.412 "is_configured": true, 00:12:06.412 "data_offset": 0, 00:12:06.412 "data_size": 65536 00:12:06.412 } 00:12:06.412 ] 00:12:06.412 }' 00:12:06.412 06:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.412 06:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.982 06:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.982 06:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:06.982 06:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.982 06:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.982 06:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.982 06:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:06.982 06:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:06.982 06:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.982 06:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.982 [2024-11-26 06:21:50.879278] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:06.982 
06:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.982 06:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:06.982 06:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:06.982 06:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:06.982 06:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:06.982 06:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:06.982 06:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:06.982 06:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:06.982 06:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:06.982 06:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:06.982 06:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:06.982 06:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.982 06:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:06.982 06:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.982 06:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.982 06:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.982 06:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:06.982 "name": "Existed_Raid", 00:12:06.982 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:06.982 "strip_size_kb": 64, 00:12:06.982 "state": "configuring", 00:12:06.982 "raid_level": "concat", 00:12:06.982 "superblock": false, 00:12:06.982 "num_base_bdevs": 3, 00:12:06.982 "num_base_bdevs_discovered": 1, 00:12:06.982 "num_base_bdevs_operational": 3, 00:12:06.982 "base_bdevs_list": [ 00:12:06.982 { 00:12:06.982 "name": "BaseBdev1", 00:12:06.982 "uuid": "b8b35e6f-597b-4be7-8994-cee0381a990c", 00:12:06.982 "is_configured": true, 00:12:06.982 "data_offset": 0, 00:12:06.982 "data_size": 65536 00:12:06.982 }, 00:12:06.982 { 00:12:06.982 "name": null, 00:12:06.982 "uuid": "e5e7a152-b54d-412f-bcd7-39425637d666", 00:12:06.982 "is_configured": false, 00:12:06.982 "data_offset": 0, 00:12:06.982 "data_size": 65536 00:12:06.982 }, 00:12:06.982 { 00:12:06.982 "name": null, 00:12:06.982 "uuid": "f196662b-6502-4e2b-842c-2bbc13ff9fbb", 00:12:06.982 "is_configured": false, 00:12:06.982 "data_offset": 0, 00:12:06.982 "data_size": 65536 00:12:06.982 } 00:12:06.982 ] 00:12:06.982 }' 00:12:06.982 06:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.982 06:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.251 06:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.251 06:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:07.251 06:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.251 06:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.251 06:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.510 06:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:07.510 06:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:07.510 06:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.510 06:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.510 [2024-11-26 06:21:51.398410] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:07.510 06:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.510 06:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:07.510 06:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:07.510 06:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:07.510 06:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:07.510 06:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:07.510 06:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:07.510 06:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.510 06:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.510 06:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.510 06:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.510 06:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.510 06:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:07.510 06:21:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.510 06:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.510 06:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.510 06:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.510 "name": "Existed_Raid", 00:12:07.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.510 "strip_size_kb": 64, 00:12:07.510 "state": "configuring", 00:12:07.510 "raid_level": "concat", 00:12:07.510 "superblock": false, 00:12:07.510 "num_base_bdevs": 3, 00:12:07.510 "num_base_bdevs_discovered": 2, 00:12:07.510 "num_base_bdevs_operational": 3, 00:12:07.510 "base_bdevs_list": [ 00:12:07.510 { 00:12:07.510 "name": "BaseBdev1", 00:12:07.510 "uuid": "b8b35e6f-597b-4be7-8994-cee0381a990c", 00:12:07.510 "is_configured": true, 00:12:07.510 "data_offset": 0, 00:12:07.510 "data_size": 65536 00:12:07.510 }, 00:12:07.510 { 00:12:07.510 "name": null, 00:12:07.510 "uuid": "e5e7a152-b54d-412f-bcd7-39425637d666", 00:12:07.510 "is_configured": false, 00:12:07.510 "data_offset": 0, 00:12:07.510 "data_size": 65536 00:12:07.510 }, 00:12:07.510 { 00:12:07.510 "name": "BaseBdev3", 00:12:07.510 "uuid": "f196662b-6502-4e2b-842c-2bbc13ff9fbb", 00:12:07.510 "is_configured": true, 00:12:07.510 "data_offset": 0, 00:12:07.510 "data_size": 65536 00:12:07.510 } 00:12:07.510 ] 00:12:07.510 }' 00:12:07.510 06:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.510 06:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.770 06:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.770 06:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.770 06:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:12:07.770 06:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:07.770 06:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.030 06:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:08.030 06:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:08.030 06:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.030 06:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.030 [2024-11-26 06:21:51.909607] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:08.030 06:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.030 06:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:08.030 06:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:08.030 06:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:08.030 06:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:08.030 06:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:08.030 06:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:08.030 06:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.030 06:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.030 06:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.030 06:21:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.030 06:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.030 06:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.030 06:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.030 06:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.030 06:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.030 06:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.030 "name": "Existed_Raid", 00:12:08.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.030 "strip_size_kb": 64, 00:12:08.030 "state": "configuring", 00:12:08.030 "raid_level": "concat", 00:12:08.030 "superblock": false, 00:12:08.030 "num_base_bdevs": 3, 00:12:08.030 "num_base_bdevs_discovered": 1, 00:12:08.030 "num_base_bdevs_operational": 3, 00:12:08.030 "base_bdevs_list": [ 00:12:08.030 { 00:12:08.030 "name": null, 00:12:08.030 "uuid": "b8b35e6f-597b-4be7-8994-cee0381a990c", 00:12:08.030 "is_configured": false, 00:12:08.030 "data_offset": 0, 00:12:08.030 "data_size": 65536 00:12:08.030 }, 00:12:08.030 { 00:12:08.030 "name": null, 00:12:08.030 "uuid": "e5e7a152-b54d-412f-bcd7-39425637d666", 00:12:08.030 "is_configured": false, 00:12:08.030 "data_offset": 0, 00:12:08.030 "data_size": 65536 00:12:08.030 }, 00:12:08.030 { 00:12:08.030 "name": "BaseBdev3", 00:12:08.030 "uuid": "f196662b-6502-4e2b-842c-2bbc13ff9fbb", 00:12:08.030 "is_configured": true, 00:12:08.030 "data_offset": 0, 00:12:08.030 "data_size": 65536 00:12:08.030 } 00:12:08.030 ] 00:12:08.030 }' 00:12:08.030 06:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.030 06:21:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.599 06:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.599 06:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.599 06:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.599 06:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:08.599 06:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.599 06:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:08.599 06:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:08.599 06:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.599 06:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.599 [2024-11-26 06:21:52.511998] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:08.599 06:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.599 06:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:08.599 06:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:08.600 06:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:08.600 06:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:08.600 06:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:08.600 06:21:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:08.600 06:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.600 06:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.600 06:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.600 06:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.600 06:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.600 06:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.600 06:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.600 06:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.600 06:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.600 06:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.600 "name": "Existed_Raid", 00:12:08.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.600 "strip_size_kb": 64, 00:12:08.600 "state": "configuring", 00:12:08.600 "raid_level": "concat", 00:12:08.600 "superblock": false, 00:12:08.600 "num_base_bdevs": 3, 00:12:08.600 "num_base_bdevs_discovered": 2, 00:12:08.600 "num_base_bdevs_operational": 3, 00:12:08.600 "base_bdevs_list": [ 00:12:08.600 { 00:12:08.600 "name": null, 00:12:08.600 "uuid": "b8b35e6f-597b-4be7-8994-cee0381a990c", 00:12:08.600 "is_configured": false, 00:12:08.600 "data_offset": 0, 00:12:08.600 "data_size": 65536 00:12:08.600 }, 00:12:08.600 { 00:12:08.600 "name": "BaseBdev2", 00:12:08.600 "uuid": "e5e7a152-b54d-412f-bcd7-39425637d666", 00:12:08.600 "is_configured": true, 00:12:08.600 "data_offset": 
0, 00:12:08.600 "data_size": 65536 00:12:08.600 }, 00:12:08.600 { 00:12:08.600 "name": "BaseBdev3", 00:12:08.600 "uuid": "f196662b-6502-4e2b-842c-2bbc13ff9fbb", 00:12:08.600 "is_configured": true, 00:12:08.600 "data_offset": 0, 00:12:08.600 "data_size": 65536 00:12:08.600 } 00:12:08.600 ] 00:12:08.600 }' 00:12:08.600 06:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.600 06:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.859 06:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.859 06:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.859 06:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.859 06:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:09.118 06:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.118 06:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:09.118 06:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:09.118 06:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.118 06:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.118 06:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.118 06:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.118 06:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b8b35e6f-597b-4be7-8994-cee0381a990c 00:12:09.118 06:21:53 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.118 06:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.118 [2024-11-26 06:21:53.115935] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:09.118 [2024-11-26 06:21:53.116011] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:09.118 [2024-11-26 06:21:53.116023] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:12:09.118 [2024-11-26 06:21:53.116431] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:09.118 [2024-11-26 06:21:53.116629] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:09.118 [2024-11-26 06:21:53.116648] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:09.119 [2024-11-26 06:21:53.117081] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:09.119 NewBaseBdev 00:12:09.119 06:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.119 06:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:09.119 06:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:09.119 06:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:09.119 06:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:09.119 06:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:09.119 06:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:09.119 06:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:09.119 
06:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.119 06:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.119 06:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.119 06:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:09.119 06:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.119 06:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.119 [ 00:12:09.119 { 00:12:09.119 "name": "NewBaseBdev", 00:12:09.119 "aliases": [ 00:12:09.119 "b8b35e6f-597b-4be7-8994-cee0381a990c" 00:12:09.119 ], 00:12:09.119 "product_name": "Malloc disk", 00:12:09.119 "block_size": 512, 00:12:09.119 "num_blocks": 65536, 00:12:09.119 "uuid": "b8b35e6f-597b-4be7-8994-cee0381a990c", 00:12:09.119 "assigned_rate_limits": { 00:12:09.119 "rw_ios_per_sec": 0, 00:12:09.119 "rw_mbytes_per_sec": 0, 00:12:09.119 "r_mbytes_per_sec": 0, 00:12:09.119 "w_mbytes_per_sec": 0 00:12:09.119 }, 00:12:09.119 "claimed": true, 00:12:09.119 "claim_type": "exclusive_write", 00:12:09.119 "zoned": false, 00:12:09.119 "supported_io_types": { 00:12:09.119 "read": true, 00:12:09.119 "write": true, 00:12:09.119 "unmap": true, 00:12:09.119 "flush": true, 00:12:09.119 "reset": true, 00:12:09.119 "nvme_admin": false, 00:12:09.119 "nvme_io": false, 00:12:09.119 "nvme_io_md": false, 00:12:09.119 "write_zeroes": true, 00:12:09.119 "zcopy": true, 00:12:09.119 "get_zone_info": false, 00:12:09.119 "zone_management": false, 00:12:09.119 "zone_append": false, 00:12:09.119 "compare": false, 00:12:09.119 "compare_and_write": false, 00:12:09.119 "abort": true, 00:12:09.119 "seek_hole": false, 00:12:09.119 "seek_data": false, 00:12:09.119 "copy": true, 00:12:09.119 "nvme_iov_md": false 00:12:09.119 }, 00:12:09.119 
"memory_domains": [ 00:12:09.119 { 00:12:09.119 "dma_device_id": "system", 00:12:09.119 "dma_device_type": 1 00:12:09.119 }, 00:12:09.119 { 00:12:09.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.119 "dma_device_type": 2 00:12:09.119 } 00:12:09.119 ], 00:12:09.119 "driver_specific": {} 00:12:09.119 } 00:12:09.119 ] 00:12:09.119 06:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.119 06:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:09.119 06:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:12:09.119 06:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:09.119 06:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:09.119 06:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:09.119 06:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:09.119 06:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:09.119 06:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.119 06:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.119 06:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.119 06:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.119 06:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.119 06:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.119 06:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:09.119 06:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.119 06:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.119 06:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.119 "name": "Existed_Raid", 00:12:09.119 "uuid": "08f92b60-8da0-470d-8fa2-8aa2c2f732e4", 00:12:09.119 "strip_size_kb": 64, 00:12:09.119 "state": "online", 00:12:09.119 "raid_level": "concat", 00:12:09.119 "superblock": false, 00:12:09.119 "num_base_bdevs": 3, 00:12:09.119 "num_base_bdevs_discovered": 3, 00:12:09.119 "num_base_bdevs_operational": 3, 00:12:09.119 "base_bdevs_list": [ 00:12:09.119 { 00:12:09.119 "name": "NewBaseBdev", 00:12:09.119 "uuid": "b8b35e6f-597b-4be7-8994-cee0381a990c", 00:12:09.119 "is_configured": true, 00:12:09.119 "data_offset": 0, 00:12:09.119 "data_size": 65536 00:12:09.119 }, 00:12:09.119 { 00:12:09.119 "name": "BaseBdev2", 00:12:09.119 "uuid": "e5e7a152-b54d-412f-bcd7-39425637d666", 00:12:09.119 "is_configured": true, 00:12:09.119 "data_offset": 0, 00:12:09.119 "data_size": 65536 00:12:09.119 }, 00:12:09.119 { 00:12:09.119 "name": "BaseBdev3", 00:12:09.119 "uuid": "f196662b-6502-4e2b-842c-2bbc13ff9fbb", 00:12:09.119 "is_configured": true, 00:12:09.119 "data_offset": 0, 00:12:09.119 "data_size": 65536 00:12:09.119 } 00:12:09.119 ] 00:12:09.119 }' 00:12:09.119 06:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.119 06:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.733 06:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:09.733 06:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:09.733 06:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:12:09.733 06:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:09.733 06:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:09.733 06:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:09.733 06:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:09.733 06:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:09.733 06:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.733 06:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.733 [2024-11-26 06:21:53.635497] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:09.733 06:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.733 06:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:09.733 "name": "Existed_Raid", 00:12:09.733 "aliases": [ 00:12:09.733 "08f92b60-8da0-470d-8fa2-8aa2c2f732e4" 00:12:09.733 ], 00:12:09.733 "product_name": "Raid Volume", 00:12:09.733 "block_size": 512, 00:12:09.733 "num_blocks": 196608, 00:12:09.733 "uuid": "08f92b60-8da0-470d-8fa2-8aa2c2f732e4", 00:12:09.733 "assigned_rate_limits": { 00:12:09.733 "rw_ios_per_sec": 0, 00:12:09.733 "rw_mbytes_per_sec": 0, 00:12:09.733 "r_mbytes_per_sec": 0, 00:12:09.733 "w_mbytes_per_sec": 0 00:12:09.733 }, 00:12:09.733 "claimed": false, 00:12:09.733 "zoned": false, 00:12:09.733 "supported_io_types": { 00:12:09.733 "read": true, 00:12:09.733 "write": true, 00:12:09.733 "unmap": true, 00:12:09.733 "flush": true, 00:12:09.733 "reset": true, 00:12:09.733 "nvme_admin": false, 00:12:09.733 "nvme_io": false, 00:12:09.733 "nvme_io_md": false, 00:12:09.733 "write_zeroes": true, 
00:12:09.733 "zcopy": false, 00:12:09.733 "get_zone_info": false, 00:12:09.733 "zone_management": false, 00:12:09.733 "zone_append": false, 00:12:09.733 "compare": false, 00:12:09.733 "compare_and_write": false, 00:12:09.733 "abort": false, 00:12:09.733 "seek_hole": false, 00:12:09.733 "seek_data": false, 00:12:09.733 "copy": false, 00:12:09.733 "nvme_iov_md": false 00:12:09.733 }, 00:12:09.733 "memory_domains": [ 00:12:09.733 { 00:12:09.733 "dma_device_id": "system", 00:12:09.733 "dma_device_type": 1 00:12:09.733 }, 00:12:09.733 { 00:12:09.733 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.733 "dma_device_type": 2 00:12:09.733 }, 00:12:09.733 { 00:12:09.733 "dma_device_id": "system", 00:12:09.733 "dma_device_type": 1 00:12:09.733 }, 00:12:09.733 { 00:12:09.733 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.733 "dma_device_type": 2 00:12:09.733 }, 00:12:09.733 { 00:12:09.733 "dma_device_id": "system", 00:12:09.733 "dma_device_type": 1 00:12:09.733 }, 00:12:09.733 { 00:12:09.733 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.733 "dma_device_type": 2 00:12:09.733 } 00:12:09.733 ], 00:12:09.733 "driver_specific": { 00:12:09.733 "raid": { 00:12:09.733 "uuid": "08f92b60-8da0-470d-8fa2-8aa2c2f732e4", 00:12:09.733 "strip_size_kb": 64, 00:12:09.733 "state": "online", 00:12:09.733 "raid_level": "concat", 00:12:09.733 "superblock": false, 00:12:09.733 "num_base_bdevs": 3, 00:12:09.733 "num_base_bdevs_discovered": 3, 00:12:09.733 "num_base_bdevs_operational": 3, 00:12:09.733 "base_bdevs_list": [ 00:12:09.733 { 00:12:09.733 "name": "NewBaseBdev", 00:12:09.733 "uuid": "b8b35e6f-597b-4be7-8994-cee0381a990c", 00:12:09.733 "is_configured": true, 00:12:09.733 "data_offset": 0, 00:12:09.733 "data_size": 65536 00:12:09.733 }, 00:12:09.733 { 00:12:09.733 "name": "BaseBdev2", 00:12:09.733 "uuid": "e5e7a152-b54d-412f-bcd7-39425637d666", 00:12:09.733 "is_configured": true, 00:12:09.733 "data_offset": 0, 00:12:09.733 "data_size": 65536 00:12:09.733 }, 00:12:09.733 { 
00:12:09.733 "name": "BaseBdev3", 00:12:09.733 "uuid": "f196662b-6502-4e2b-842c-2bbc13ff9fbb", 00:12:09.733 "is_configured": true, 00:12:09.733 "data_offset": 0, 00:12:09.733 "data_size": 65536 00:12:09.733 } 00:12:09.733 ] 00:12:09.733 } 00:12:09.733 } 00:12:09.733 }' 00:12:09.733 06:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:09.733 06:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:09.733 BaseBdev2 00:12:09.733 BaseBdev3' 00:12:09.733 06:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:09.733 06:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:09.733 06:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:09.733 06:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:09.733 06:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:09.733 06:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.733 06:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.733 06:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.733 06:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:09.733 06:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:09.733 06:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:09.734 06:21:53 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:09.734 06:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.734 06:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.734 06:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:09.734 06:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.734 06:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:09.734 06:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:09.734 06:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:09.734 06:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:09.734 06:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.734 06:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.734 06:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:09.734 06:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.001 06:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:10.001 06:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:10.001 06:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:10.001 06:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.001 06:21:53 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:12:10.001 [2024-11-26 06:21:53.866782] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:10.001 [2024-11-26 06:21:53.866820] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:10.001 [2024-11-26 06:21:53.866940] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:10.001 [2024-11-26 06:21:53.867011] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:10.001 [2024-11-26 06:21:53.867026] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:10.001 06:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.001 06:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 66012 00:12:10.001 06:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 66012 ']' 00:12:10.001 06:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 66012 00:12:10.001 06:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:12:10.001 06:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:10.001 06:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66012 00:12:10.001 06:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:10.001 06:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:10.001 killing process with pid 66012 00:12:10.001 06:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66012' 00:12:10.001 06:21:53 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@973 -- # kill 66012 00:12:10.001 [2024-11-26 06:21:53.906036] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:10.001 06:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 66012 00:12:10.260 [2024-11-26 06:21:54.293848] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:11.638 06:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:11.638 ************************************ 00:12:11.638 END TEST raid_state_function_test 00:12:11.638 ************************************ 00:12:11.638 00:12:11.638 real 0m11.267s 00:12:11.638 user 0m17.551s 00:12:11.638 sys 0m2.092s 00:12:11.638 06:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:11.638 06:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.638 06:21:55 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:12:11.638 06:21:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:11.638 06:21:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:11.638 06:21:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:11.638 ************************************ 00:12:11.638 START TEST raid_state_function_test_sb 00:12:11.638 ************************************ 00:12:11.638 06:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:12:11.638 06:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:12:11.638 06:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:12:11.638 06:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:11.638 06:21:55 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:11.638 06:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:11.638 06:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:11.638 06:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:11.638 06:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:11.638 06:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:11.638 06:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:11.638 06:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:11.638 06:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:11.638 06:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:11.638 06:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:11.638 06:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:11.638 06:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:11.638 06:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:11.638 06:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:11.638 06:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:11.638 06:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:11.638 06:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:11.638 06:21:55 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:12:11.638 06:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:11.638 06:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:11.638 06:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:11.638 06:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:11.638 06:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66646 00:12:11.638 06:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:11.638 Process raid pid: 66646 00:12:11.638 06:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66646' 00:12:11.638 06:21:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66646 00:12:11.638 06:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 66646 ']' 00:12:11.638 06:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:11.638 06:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:11.638 06:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:11.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:11.638 06:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:11.638 06:21:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.638 [2024-11-26 06:21:55.751488] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:12:11.638 [2024-11-26 06:21:55.751619] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:11.897 [2024-11-26 06:21:55.940116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:12.156 [2024-11-26 06:21:56.081366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.440 [2024-11-26 06:21:56.334432] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:12.440 [2024-11-26 06:21:56.334498] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:12.710 06:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:12.710 06:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:12.710 06:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:12.710 06:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.710 06:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.710 [2024-11-26 06:21:56.611685] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:12.710 [2024-11-26 06:21:56.611757] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:12.710 [2024-11-26 
06:21:56.611769] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:12.710 [2024-11-26 06:21:56.611780] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:12.710 [2024-11-26 06:21:56.611787] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:12.710 [2024-11-26 06:21:56.611796] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:12.710 06:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.710 06:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:12.710 06:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:12.710 06:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:12.710 06:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:12.710 06:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:12.710 06:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:12.710 06:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:12.710 06:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:12.710 06:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:12.710 06:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:12.710 06:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.710 06:21:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.710 06:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.710 06:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:12.710 06:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.710 06:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:12.710 "name": "Existed_Raid", 00:12:12.710 "uuid": "25bc134a-210f-4dd8-9cb6-749b326856b8", 00:12:12.710 "strip_size_kb": 64, 00:12:12.710 "state": "configuring", 00:12:12.710 "raid_level": "concat", 00:12:12.710 "superblock": true, 00:12:12.710 "num_base_bdevs": 3, 00:12:12.710 "num_base_bdevs_discovered": 0, 00:12:12.710 "num_base_bdevs_operational": 3, 00:12:12.710 "base_bdevs_list": [ 00:12:12.710 { 00:12:12.710 "name": "BaseBdev1", 00:12:12.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.710 "is_configured": false, 00:12:12.710 "data_offset": 0, 00:12:12.710 "data_size": 0 00:12:12.710 }, 00:12:12.710 { 00:12:12.710 "name": "BaseBdev2", 00:12:12.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.710 "is_configured": false, 00:12:12.710 "data_offset": 0, 00:12:12.710 "data_size": 0 00:12:12.710 }, 00:12:12.710 { 00:12:12.710 "name": "BaseBdev3", 00:12:12.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.710 "is_configured": false, 00:12:12.710 "data_offset": 0, 00:12:12.710 "data_size": 0 00:12:12.710 } 00:12:12.710 ] 00:12:12.710 }' 00:12:12.710 06:21:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:12.710 06:21:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.970 06:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:12.970 06:21:57 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.970 06:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.970 [2024-11-26 06:21:57.086776] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:12.970 [2024-11-26 06:21:57.086902] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:12.970 06:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.970 06:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:12.970 06:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.970 06:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.970 [2024-11-26 06:21:57.098745] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:12.970 [2024-11-26 06:21:57.098840] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:12.970 [2024-11-26 06:21:57.098869] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:12.970 [2024-11-26 06:21:57.098894] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:12.970 [2024-11-26 06:21:57.098913] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:12.970 [2024-11-26 06:21:57.098935] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:13.231 06:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.231 06:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:13.231 
06:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.231 06:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.231 [2024-11-26 06:21:57.155781] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:13.231 BaseBdev1 00:12:13.231 06:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.231 06:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:13.231 06:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:13.231 06:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:13.231 06:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:13.231 06:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:13.231 06:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:13.231 06:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:13.231 06:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.231 06:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.231 06:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.231 06:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:13.231 06:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.231 06:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.231 [ 00:12:13.231 { 
00:12:13.231 "name": "BaseBdev1", 00:12:13.231 "aliases": [ 00:12:13.231 "f1672a58-5f4b-43eb-986f-263d090689d3" 00:12:13.231 ], 00:12:13.231 "product_name": "Malloc disk", 00:12:13.231 "block_size": 512, 00:12:13.231 "num_blocks": 65536, 00:12:13.231 "uuid": "f1672a58-5f4b-43eb-986f-263d090689d3", 00:12:13.231 "assigned_rate_limits": { 00:12:13.231 "rw_ios_per_sec": 0, 00:12:13.231 "rw_mbytes_per_sec": 0, 00:12:13.231 "r_mbytes_per_sec": 0, 00:12:13.231 "w_mbytes_per_sec": 0 00:12:13.231 }, 00:12:13.231 "claimed": true, 00:12:13.231 "claim_type": "exclusive_write", 00:12:13.231 "zoned": false, 00:12:13.231 "supported_io_types": { 00:12:13.231 "read": true, 00:12:13.231 "write": true, 00:12:13.231 "unmap": true, 00:12:13.231 "flush": true, 00:12:13.231 "reset": true, 00:12:13.231 "nvme_admin": false, 00:12:13.231 "nvme_io": false, 00:12:13.231 "nvme_io_md": false, 00:12:13.231 "write_zeroes": true, 00:12:13.231 "zcopy": true, 00:12:13.231 "get_zone_info": false, 00:12:13.231 "zone_management": false, 00:12:13.231 "zone_append": false, 00:12:13.231 "compare": false, 00:12:13.231 "compare_and_write": false, 00:12:13.231 "abort": true, 00:12:13.231 "seek_hole": false, 00:12:13.231 "seek_data": false, 00:12:13.231 "copy": true, 00:12:13.231 "nvme_iov_md": false 00:12:13.231 }, 00:12:13.231 "memory_domains": [ 00:12:13.231 { 00:12:13.231 "dma_device_id": "system", 00:12:13.231 "dma_device_type": 1 00:12:13.231 }, 00:12:13.231 { 00:12:13.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:13.231 "dma_device_type": 2 00:12:13.231 } 00:12:13.231 ], 00:12:13.231 "driver_specific": {} 00:12:13.231 } 00:12:13.231 ] 00:12:13.231 06:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.231 06:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:13.231 06:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:12:13.231 06:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:13.231 06:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:13.231 06:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:13.231 06:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:13.231 06:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:13.231 06:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.231 06:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.231 06:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.231 06:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.232 06:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.232 06:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:13.232 06:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.232 06:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.232 06:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.232 06:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.232 "name": "Existed_Raid", 00:12:13.232 "uuid": "f66847be-4f29-4b60-bace-bf96472a82ac", 00:12:13.232 "strip_size_kb": 64, 00:12:13.232 "state": "configuring", 00:12:13.232 "raid_level": "concat", 00:12:13.232 "superblock": true, 00:12:13.232 
"num_base_bdevs": 3, 00:12:13.232 "num_base_bdevs_discovered": 1, 00:12:13.232 "num_base_bdevs_operational": 3, 00:12:13.232 "base_bdevs_list": [ 00:12:13.232 { 00:12:13.232 "name": "BaseBdev1", 00:12:13.232 "uuid": "f1672a58-5f4b-43eb-986f-263d090689d3", 00:12:13.232 "is_configured": true, 00:12:13.232 "data_offset": 2048, 00:12:13.232 "data_size": 63488 00:12:13.232 }, 00:12:13.232 { 00:12:13.232 "name": "BaseBdev2", 00:12:13.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.232 "is_configured": false, 00:12:13.232 "data_offset": 0, 00:12:13.232 "data_size": 0 00:12:13.232 }, 00:12:13.232 { 00:12:13.232 "name": "BaseBdev3", 00:12:13.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.232 "is_configured": false, 00:12:13.232 "data_offset": 0, 00:12:13.232 "data_size": 0 00:12:13.232 } 00:12:13.232 ] 00:12:13.232 }' 00:12:13.232 06:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.232 06:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.801 06:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:13.801 06:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.801 06:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.801 [2024-11-26 06:21:57.655044] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:13.801 [2024-11-26 06:21:57.655184] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:13.801 06:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.801 06:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:13.801 
06:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.801 06:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.801 [2024-11-26 06:21:57.663101] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:13.801 [2024-11-26 06:21:57.665489] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:13.801 [2024-11-26 06:21:57.665572] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:13.801 [2024-11-26 06:21:57.665604] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:13.801 [2024-11-26 06:21:57.665628] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:13.801 06:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.801 06:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:13.801 06:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:13.801 06:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:13.801 06:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:13.801 06:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:13.801 06:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:13.801 06:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:13.801 06:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:13.801 06:21:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.801 06:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.801 06:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.801 06:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.801 06:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.801 06:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.801 06:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.801 06:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:13.801 06:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.801 06:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.801 "name": "Existed_Raid", 00:12:13.801 "uuid": "e7772c6f-c67c-4e34-a2ea-4c034affef0c", 00:12:13.801 "strip_size_kb": 64, 00:12:13.801 "state": "configuring", 00:12:13.801 "raid_level": "concat", 00:12:13.801 "superblock": true, 00:12:13.801 "num_base_bdevs": 3, 00:12:13.801 "num_base_bdevs_discovered": 1, 00:12:13.801 "num_base_bdevs_operational": 3, 00:12:13.801 "base_bdevs_list": [ 00:12:13.801 { 00:12:13.801 "name": "BaseBdev1", 00:12:13.801 "uuid": "f1672a58-5f4b-43eb-986f-263d090689d3", 00:12:13.801 "is_configured": true, 00:12:13.801 "data_offset": 2048, 00:12:13.801 "data_size": 63488 00:12:13.801 }, 00:12:13.801 { 00:12:13.801 "name": "BaseBdev2", 00:12:13.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.801 "is_configured": false, 00:12:13.801 "data_offset": 0, 00:12:13.801 "data_size": 0 00:12:13.801 }, 00:12:13.801 { 00:12:13.801 "name": "BaseBdev3", 00:12:13.801 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:13.801 "is_configured": false, 00:12:13.801 "data_offset": 0, 00:12:13.801 "data_size": 0 00:12:13.801 } 00:12:13.801 ] 00:12:13.801 }' 00:12:13.801 06:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.801 06:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.061 06:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:14.061 06:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.061 06:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.061 [2024-11-26 06:21:58.159859] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:14.061 BaseBdev2 00:12:14.061 06:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.061 06:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:14.061 06:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:14.061 06:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:14.061 06:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:14.061 06:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:14.061 06:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:14.061 06:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:14.061 06:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.061 06:21:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:14.061 06:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.061 06:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:14.061 06:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.061 06:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.061 [ 00:12:14.061 { 00:12:14.061 "name": "BaseBdev2", 00:12:14.061 "aliases": [ 00:12:14.061 "c1072120-bf71-4e12-be4a-f88bf8b1dc40" 00:12:14.061 ], 00:12:14.061 "product_name": "Malloc disk", 00:12:14.061 "block_size": 512, 00:12:14.061 "num_blocks": 65536, 00:12:14.061 "uuid": "c1072120-bf71-4e12-be4a-f88bf8b1dc40", 00:12:14.061 "assigned_rate_limits": { 00:12:14.061 "rw_ios_per_sec": 0, 00:12:14.061 "rw_mbytes_per_sec": 0, 00:12:14.061 "r_mbytes_per_sec": 0, 00:12:14.320 "w_mbytes_per_sec": 0 00:12:14.320 }, 00:12:14.320 "claimed": true, 00:12:14.320 "claim_type": "exclusive_write", 00:12:14.320 "zoned": false, 00:12:14.320 "supported_io_types": { 00:12:14.320 "read": true, 00:12:14.320 "write": true, 00:12:14.320 "unmap": true, 00:12:14.320 "flush": true, 00:12:14.320 "reset": true, 00:12:14.320 "nvme_admin": false, 00:12:14.320 "nvme_io": false, 00:12:14.320 "nvme_io_md": false, 00:12:14.320 "write_zeroes": true, 00:12:14.320 "zcopy": true, 00:12:14.320 "get_zone_info": false, 00:12:14.320 "zone_management": false, 00:12:14.320 "zone_append": false, 00:12:14.320 "compare": false, 00:12:14.320 "compare_and_write": false, 00:12:14.320 "abort": true, 00:12:14.320 "seek_hole": false, 00:12:14.320 "seek_data": false, 00:12:14.320 "copy": true, 00:12:14.320 "nvme_iov_md": false 00:12:14.320 }, 00:12:14.320 "memory_domains": [ 00:12:14.320 { 00:12:14.320 "dma_device_id": "system", 00:12:14.320 "dma_device_type": 1 00:12:14.320 }, 00:12:14.320 { 00:12:14.320 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.320 "dma_device_type": 2 00:12:14.320 } 00:12:14.320 ], 00:12:14.320 "driver_specific": {} 00:12:14.320 } 00:12:14.320 ] 00:12:14.320 06:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.320 06:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:14.320 06:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:14.320 06:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:14.320 06:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:14.320 06:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:14.320 06:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:14.320 06:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:14.320 06:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:14.320 06:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:14.320 06:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.320 06:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.320 06:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.320 06:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.320 06:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:14.320 06:21:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.320 06:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.320 06:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.320 06:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.320 06:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.320 "name": "Existed_Raid", 00:12:14.320 "uuid": "e7772c6f-c67c-4e34-a2ea-4c034affef0c", 00:12:14.320 "strip_size_kb": 64, 00:12:14.320 "state": "configuring", 00:12:14.320 "raid_level": "concat", 00:12:14.320 "superblock": true, 00:12:14.321 "num_base_bdevs": 3, 00:12:14.321 "num_base_bdevs_discovered": 2, 00:12:14.321 "num_base_bdevs_operational": 3, 00:12:14.321 "base_bdevs_list": [ 00:12:14.321 { 00:12:14.321 "name": "BaseBdev1", 00:12:14.321 "uuid": "f1672a58-5f4b-43eb-986f-263d090689d3", 00:12:14.321 "is_configured": true, 00:12:14.321 "data_offset": 2048, 00:12:14.321 "data_size": 63488 00:12:14.321 }, 00:12:14.321 { 00:12:14.321 "name": "BaseBdev2", 00:12:14.321 "uuid": "c1072120-bf71-4e12-be4a-f88bf8b1dc40", 00:12:14.321 "is_configured": true, 00:12:14.321 "data_offset": 2048, 00:12:14.321 "data_size": 63488 00:12:14.321 }, 00:12:14.321 { 00:12:14.321 "name": "BaseBdev3", 00:12:14.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.321 "is_configured": false, 00:12:14.321 "data_offset": 0, 00:12:14.321 "data_size": 0 00:12:14.321 } 00:12:14.321 ] 00:12:14.321 }' 00:12:14.321 06:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.321 06:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.580 06:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:14.580 06:21:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.580 06:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.839 [2024-11-26 06:21:58.740675] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:14.839 [2024-11-26 06:21:58.741010] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:14.839 [2024-11-26 06:21:58.741038] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:14.839 [2024-11-26 06:21:58.741406] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:14.839 [2024-11-26 06:21:58.741578] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:14.839 [2024-11-26 06:21:58.741589] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:14.839 BaseBdev3 00:12:14.839 [2024-11-26 06:21:58.741817] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:14.839 06:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.839 06:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:14.839 06:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:14.839 06:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:14.839 06:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:14.839 06:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:14.839 06:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:14.839 06:21:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:14.839 06:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.839 06:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.839 06:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.839 06:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:14.839 06:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.839 06:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.839 [ 00:12:14.839 { 00:12:14.839 "name": "BaseBdev3", 00:12:14.839 "aliases": [ 00:12:14.839 "7ee67279-f826-4e0a-974d-4d828d6bb890" 00:12:14.839 ], 00:12:14.839 "product_name": "Malloc disk", 00:12:14.839 "block_size": 512, 00:12:14.839 "num_blocks": 65536, 00:12:14.839 "uuid": "7ee67279-f826-4e0a-974d-4d828d6bb890", 00:12:14.839 "assigned_rate_limits": { 00:12:14.839 "rw_ios_per_sec": 0, 00:12:14.839 "rw_mbytes_per_sec": 0, 00:12:14.839 "r_mbytes_per_sec": 0, 00:12:14.839 "w_mbytes_per_sec": 0 00:12:14.839 }, 00:12:14.839 "claimed": true, 00:12:14.839 "claim_type": "exclusive_write", 00:12:14.839 "zoned": false, 00:12:14.839 "supported_io_types": { 00:12:14.839 "read": true, 00:12:14.839 "write": true, 00:12:14.839 "unmap": true, 00:12:14.839 "flush": true, 00:12:14.839 "reset": true, 00:12:14.839 "nvme_admin": false, 00:12:14.839 "nvme_io": false, 00:12:14.839 "nvme_io_md": false, 00:12:14.839 "write_zeroes": true, 00:12:14.839 "zcopy": true, 00:12:14.839 "get_zone_info": false, 00:12:14.839 "zone_management": false, 00:12:14.839 "zone_append": false, 00:12:14.839 "compare": false, 00:12:14.839 "compare_and_write": false, 00:12:14.839 "abort": true, 00:12:14.839 "seek_hole": false, 00:12:14.839 "seek_data": false, 
00:12:14.839 "copy": true, 00:12:14.839 "nvme_iov_md": false 00:12:14.839 }, 00:12:14.839 "memory_domains": [ 00:12:14.839 { 00:12:14.839 "dma_device_id": "system", 00:12:14.839 "dma_device_type": 1 00:12:14.839 }, 00:12:14.839 { 00:12:14.839 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.839 "dma_device_type": 2 00:12:14.839 } 00:12:14.839 ], 00:12:14.839 "driver_specific": {} 00:12:14.839 } 00:12:14.839 ] 00:12:14.839 06:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.839 06:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:14.839 06:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:14.839 06:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:14.839 06:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:12:14.839 06:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:14.839 06:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:14.839 06:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:14.839 06:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:14.839 06:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:14.839 06:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.839 06:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.839 06:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.839 06:21:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.839 06:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.839 06:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:14.839 06:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.839 06:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.839 06:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.839 06:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.839 "name": "Existed_Raid", 00:12:14.839 "uuid": "e7772c6f-c67c-4e34-a2ea-4c034affef0c", 00:12:14.839 "strip_size_kb": 64, 00:12:14.839 "state": "online", 00:12:14.839 "raid_level": "concat", 00:12:14.839 "superblock": true, 00:12:14.839 "num_base_bdevs": 3, 00:12:14.839 "num_base_bdevs_discovered": 3, 00:12:14.839 "num_base_bdevs_operational": 3, 00:12:14.839 "base_bdevs_list": [ 00:12:14.839 { 00:12:14.839 "name": "BaseBdev1", 00:12:14.839 "uuid": "f1672a58-5f4b-43eb-986f-263d090689d3", 00:12:14.839 "is_configured": true, 00:12:14.839 "data_offset": 2048, 00:12:14.839 "data_size": 63488 00:12:14.839 }, 00:12:14.839 { 00:12:14.839 "name": "BaseBdev2", 00:12:14.839 "uuid": "c1072120-bf71-4e12-be4a-f88bf8b1dc40", 00:12:14.839 "is_configured": true, 00:12:14.839 "data_offset": 2048, 00:12:14.839 "data_size": 63488 00:12:14.839 }, 00:12:14.839 { 00:12:14.839 "name": "BaseBdev3", 00:12:14.839 "uuid": "7ee67279-f826-4e0a-974d-4d828d6bb890", 00:12:14.839 "is_configured": true, 00:12:14.839 "data_offset": 2048, 00:12:14.839 "data_size": 63488 00:12:14.839 } 00:12:14.839 ] 00:12:14.839 }' 00:12:14.839 06:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.839 06:21:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.098 06:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:15.098 06:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:15.098 06:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:15.098 06:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:15.098 06:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:15.098 06:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:15.098 06:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:15.098 06:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.098 06:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.358 06:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:15.358 [2024-11-26 06:21:59.236427] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:15.358 06:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.358 06:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:15.358 "name": "Existed_Raid", 00:12:15.358 "aliases": [ 00:12:15.358 "e7772c6f-c67c-4e34-a2ea-4c034affef0c" 00:12:15.358 ], 00:12:15.358 "product_name": "Raid Volume", 00:12:15.358 "block_size": 512, 00:12:15.358 "num_blocks": 190464, 00:12:15.358 "uuid": "e7772c6f-c67c-4e34-a2ea-4c034affef0c", 00:12:15.358 "assigned_rate_limits": { 00:12:15.358 "rw_ios_per_sec": 0, 00:12:15.358 "rw_mbytes_per_sec": 0, 00:12:15.358 
"r_mbytes_per_sec": 0, 00:12:15.358 "w_mbytes_per_sec": 0 00:12:15.358 }, 00:12:15.358 "claimed": false, 00:12:15.358 "zoned": false, 00:12:15.358 "supported_io_types": { 00:12:15.358 "read": true, 00:12:15.358 "write": true, 00:12:15.358 "unmap": true, 00:12:15.358 "flush": true, 00:12:15.358 "reset": true, 00:12:15.358 "nvme_admin": false, 00:12:15.358 "nvme_io": false, 00:12:15.358 "nvme_io_md": false, 00:12:15.358 "write_zeroes": true, 00:12:15.358 "zcopy": false, 00:12:15.358 "get_zone_info": false, 00:12:15.358 "zone_management": false, 00:12:15.359 "zone_append": false, 00:12:15.359 "compare": false, 00:12:15.359 "compare_and_write": false, 00:12:15.359 "abort": false, 00:12:15.359 "seek_hole": false, 00:12:15.359 "seek_data": false, 00:12:15.359 "copy": false, 00:12:15.359 "nvme_iov_md": false 00:12:15.359 }, 00:12:15.359 "memory_domains": [ 00:12:15.359 { 00:12:15.359 "dma_device_id": "system", 00:12:15.359 "dma_device_type": 1 00:12:15.359 }, 00:12:15.359 { 00:12:15.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:15.359 "dma_device_type": 2 00:12:15.359 }, 00:12:15.359 { 00:12:15.359 "dma_device_id": "system", 00:12:15.359 "dma_device_type": 1 00:12:15.359 }, 00:12:15.359 { 00:12:15.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:15.359 "dma_device_type": 2 00:12:15.359 }, 00:12:15.359 { 00:12:15.359 "dma_device_id": "system", 00:12:15.359 "dma_device_type": 1 00:12:15.359 }, 00:12:15.359 { 00:12:15.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:15.359 "dma_device_type": 2 00:12:15.359 } 00:12:15.359 ], 00:12:15.359 "driver_specific": { 00:12:15.359 "raid": { 00:12:15.359 "uuid": "e7772c6f-c67c-4e34-a2ea-4c034affef0c", 00:12:15.359 "strip_size_kb": 64, 00:12:15.359 "state": "online", 00:12:15.359 "raid_level": "concat", 00:12:15.359 "superblock": true, 00:12:15.359 "num_base_bdevs": 3, 00:12:15.359 "num_base_bdevs_discovered": 3, 00:12:15.359 "num_base_bdevs_operational": 3, 00:12:15.359 "base_bdevs_list": [ 00:12:15.359 { 00:12:15.359 
"name": "BaseBdev1", 00:12:15.359 "uuid": "f1672a58-5f4b-43eb-986f-263d090689d3", 00:12:15.359 "is_configured": true, 00:12:15.359 "data_offset": 2048, 00:12:15.359 "data_size": 63488 00:12:15.359 }, 00:12:15.359 { 00:12:15.359 "name": "BaseBdev2", 00:12:15.359 "uuid": "c1072120-bf71-4e12-be4a-f88bf8b1dc40", 00:12:15.359 "is_configured": true, 00:12:15.359 "data_offset": 2048, 00:12:15.359 "data_size": 63488 00:12:15.359 }, 00:12:15.359 { 00:12:15.359 "name": "BaseBdev3", 00:12:15.359 "uuid": "7ee67279-f826-4e0a-974d-4d828d6bb890", 00:12:15.359 "is_configured": true, 00:12:15.359 "data_offset": 2048, 00:12:15.359 "data_size": 63488 00:12:15.359 } 00:12:15.359 ] 00:12:15.359 } 00:12:15.359 } 00:12:15.359 }' 00:12:15.359 06:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:15.359 06:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:15.359 BaseBdev2 00:12:15.359 BaseBdev3' 00:12:15.359 06:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:15.359 06:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:15.359 06:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:15.359 06:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:15.359 06:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.359 06:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.359 06:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:15.359 06:21:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.359 06:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:15.359 06:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:15.359 06:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:15.359 06:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:15.359 06:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:15.359 06:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.359 06:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.359 06:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.359 06:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:15.359 06:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:15.359 06:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:15.359 06:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:15.359 06:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.359 06:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.359 06:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:15.618 06:21:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.618 06:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:15.618 06:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:15.618 06:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:15.618 06:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.618 06:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.618 [2024-11-26 06:21:59.535741] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:15.618 [2024-11-26 06:21:59.535782] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:15.618 [2024-11-26 06:21:59.535854] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:15.618 06:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.618 06:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:15.618 06:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:12:15.618 06:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:15.618 06:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:12:15.618 06:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:15.618 06:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:12:15.618 06:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:15.618 06:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:12:15.618 06:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:15.618 06:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:15.618 06:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:15.618 06:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.618 06:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.618 06:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.618 06:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.618 06:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.618 06:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:15.618 06:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.618 06:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.618 06:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.618 06:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.618 "name": "Existed_Raid", 00:12:15.618 "uuid": "e7772c6f-c67c-4e34-a2ea-4c034affef0c", 00:12:15.618 "strip_size_kb": 64, 00:12:15.618 "state": "offline", 00:12:15.618 "raid_level": "concat", 00:12:15.618 "superblock": true, 00:12:15.618 "num_base_bdevs": 3, 00:12:15.618 "num_base_bdevs_discovered": 2, 00:12:15.618 "num_base_bdevs_operational": 2, 00:12:15.618 "base_bdevs_list": [ 00:12:15.618 { 00:12:15.618 "name": null, 00:12:15.618 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:15.618 "is_configured": false, 00:12:15.618 "data_offset": 0, 00:12:15.618 "data_size": 63488 00:12:15.618 }, 00:12:15.618 { 00:12:15.618 "name": "BaseBdev2", 00:12:15.618 "uuid": "c1072120-bf71-4e12-be4a-f88bf8b1dc40", 00:12:15.618 "is_configured": true, 00:12:15.618 "data_offset": 2048, 00:12:15.618 "data_size": 63488 00:12:15.618 }, 00:12:15.618 { 00:12:15.618 "name": "BaseBdev3", 00:12:15.618 "uuid": "7ee67279-f826-4e0a-974d-4d828d6bb890", 00:12:15.618 "is_configured": true, 00:12:15.618 "data_offset": 2048, 00:12:15.618 "data_size": 63488 00:12:15.618 } 00:12:15.618 ] 00:12:15.618 }' 00:12:15.618 06:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.618 06:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.188 06:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:16.188 06:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:16.188 06:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.188 06:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:16.189 06:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.189 06:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.189 06:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.189 06:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:16.189 06:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:16.189 06:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:12:16.189 06:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.189 06:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.189 [2024-11-26 06:22:00.110131] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:16.189 06:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.189 06:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:16.189 06:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:16.189 06:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.189 06:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.189 06:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:16.189 06:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.189 06:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.189 06:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:16.189 06:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:16.189 06:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:16.189 06:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.189 06:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.189 [2024-11-26 06:22:00.276971] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:16.189 [2024-11-26 06:22:00.277147] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:16.448 06:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.448 06:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:16.448 06:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:16.448 06:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.448 06:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:16.448 06:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.448 06:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.448 06:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.448 06:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:16.448 06:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:16.448 06:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:12:16.448 06:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:16.448 06:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:16.448 06:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:16.448 06:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.448 06:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.448 BaseBdev2 00:12:16.448 06:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.448 
06:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:16.448 06:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:16.448 06:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:16.448 06:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:16.448 06:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:16.448 06:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:16.448 06:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:16.448 06:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.448 06:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.448 06:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.448 06:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:16.448 06:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.448 06:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.448 [ 00:12:16.448 { 00:12:16.448 "name": "BaseBdev2", 00:12:16.448 "aliases": [ 00:12:16.448 "bb7de4b9-e65d-448f-b2a4-30e7449303fc" 00:12:16.448 ], 00:12:16.448 "product_name": "Malloc disk", 00:12:16.448 "block_size": 512, 00:12:16.448 "num_blocks": 65536, 00:12:16.448 "uuid": "bb7de4b9-e65d-448f-b2a4-30e7449303fc", 00:12:16.448 "assigned_rate_limits": { 00:12:16.448 "rw_ios_per_sec": 0, 00:12:16.448 "rw_mbytes_per_sec": 0, 00:12:16.448 "r_mbytes_per_sec": 0, 00:12:16.448 "w_mbytes_per_sec": 0 
00:12:16.448 }, 00:12:16.448 "claimed": false, 00:12:16.448 "zoned": false, 00:12:16.448 "supported_io_types": { 00:12:16.448 "read": true, 00:12:16.448 "write": true, 00:12:16.448 "unmap": true, 00:12:16.448 "flush": true, 00:12:16.448 "reset": true, 00:12:16.448 "nvme_admin": false, 00:12:16.448 "nvme_io": false, 00:12:16.448 "nvme_io_md": false, 00:12:16.448 "write_zeroes": true, 00:12:16.448 "zcopy": true, 00:12:16.448 "get_zone_info": false, 00:12:16.448 "zone_management": false, 00:12:16.448 "zone_append": false, 00:12:16.448 "compare": false, 00:12:16.448 "compare_and_write": false, 00:12:16.448 "abort": true, 00:12:16.448 "seek_hole": false, 00:12:16.448 "seek_data": false, 00:12:16.448 "copy": true, 00:12:16.448 "nvme_iov_md": false 00:12:16.448 }, 00:12:16.448 "memory_domains": [ 00:12:16.448 { 00:12:16.448 "dma_device_id": "system", 00:12:16.448 "dma_device_type": 1 00:12:16.448 }, 00:12:16.448 { 00:12:16.448 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.448 "dma_device_type": 2 00:12:16.448 } 00:12:16.448 ], 00:12:16.448 "driver_specific": {} 00:12:16.448 } 00:12:16.448 ] 00:12:16.448 06:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.448 06:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:16.449 06:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:16.449 06:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:16.449 06:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:16.449 06:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.449 06:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.707 BaseBdev3 00:12:16.707 06:22:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.707 06:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:16.707 06:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:16.707 06:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:16.707 06:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:16.707 06:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:16.707 06:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:16.707 06:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:16.707 06:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.707 06:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.707 06:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.707 06:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:16.707 06:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.707 06:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.707 [ 00:12:16.707 { 00:12:16.707 "name": "BaseBdev3", 00:12:16.707 "aliases": [ 00:12:16.707 "527b8e5a-b760-442f-8246-453823784fef" 00:12:16.707 ], 00:12:16.707 "product_name": "Malloc disk", 00:12:16.707 "block_size": 512, 00:12:16.707 "num_blocks": 65536, 00:12:16.707 "uuid": "527b8e5a-b760-442f-8246-453823784fef", 00:12:16.707 "assigned_rate_limits": { 00:12:16.707 "rw_ios_per_sec": 0, 00:12:16.707 "rw_mbytes_per_sec": 0, 
00:12:16.707 "r_mbytes_per_sec": 0, 00:12:16.707 "w_mbytes_per_sec": 0 00:12:16.707 }, 00:12:16.707 "claimed": false, 00:12:16.707 "zoned": false, 00:12:16.707 "supported_io_types": { 00:12:16.707 "read": true, 00:12:16.707 "write": true, 00:12:16.707 "unmap": true, 00:12:16.707 "flush": true, 00:12:16.707 "reset": true, 00:12:16.707 "nvme_admin": false, 00:12:16.707 "nvme_io": false, 00:12:16.707 "nvme_io_md": false, 00:12:16.707 "write_zeroes": true, 00:12:16.707 "zcopy": true, 00:12:16.707 "get_zone_info": false, 00:12:16.707 "zone_management": false, 00:12:16.707 "zone_append": false, 00:12:16.707 "compare": false, 00:12:16.707 "compare_and_write": false, 00:12:16.707 "abort": true, 00:12:16.707 "seek_hole": false, 00:12:16.707 "seek_data": false, 00:12:16.707 "copy": true, 00:12:16.707 "nvme_iov_md": false 00:12:16.707 }, 00:12:16.707 "memory_domains": [ 00:12:16.707 { 00:12:16.707 "dma_device_id": "system", 00:12:16.707 "dma_device_type": 1 00:12:16.707 }, 00:12:16.707 { 00:12:16.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.707 "dma_device_type": 2 00:12:16.707 } 00:12:16.707 ], 00:12:16.707 "driver_specific": {} 00:12:16.707 } 00:12:16.707 ] 00:12:16.707 06:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.707 06:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:16.707 06:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:16.707 06:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:16.707 06:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:16.707 06:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.707 06:22:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:16.707 [2024-11-26 06:22:00.642450] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:16.707 [2024-11-26 06:22:00.642586] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:16.707 [2024-11-26 06:22:00.642641] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:16.707 [2024-11-26 06:22:00.645216] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:16.707 06:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.707 06:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:16.707 06:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:16.707 06:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:16.707 06:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:16.707 06:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:16.707 06:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:16.707 06:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.707 06:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.707 06:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.707 06:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.707 06:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:12:16.707 06:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.707 06:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.707 06:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.707 06:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.707 06:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.707 "name": "Existed_Raid", 00:12:16.707 "uuid": "d2073cbb-2f74-4516-bb02-1d358033cb43", 00:12:16.707 "strip_size_kb": 64, 00:12:16.707 "state": "configuring", 00:12:16.707 "raid_level": "concat", 00:12:16.707 "superblock": true, 00:12:16.707 "num_base_bdevs": 3, 00:12:16.707 "num_base_bdevs_discovered": 2, 00:12:16.707 "num_base_bdevs_operational": 3, 00:12:16.707 "base_bdevs_list": [ 00:12:16.707 { 00:12:16.707 "name": "BaseBdev1", 00:12:16.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.707 "is_configured": false, 00:12:16.707 "data_offset": 0, 00:12:16.707 "data_size": 0 00:12:16.707 }, 00:12:16.707 { 00:12:16.707 "name": "BaseBdev2", 00:12:16.707 "uuid": "bb7de4b9-e65d-448f-b2a4-30e7449303fc", 00:12:16.707 "is_configured": true, 00:12:16.707 "data_offset": 2048, 00:12:16.707 "data_size": 63488 00:12:16.707 }, 00:12:16.707 { 00:12:16.707 "name": "BaseBdev3", 00:12:16.707 "uuid": "527b8e5a-b760-442f-8246-453823784fef", 00:12:16.707 "is_configured": true, 00:12:16.707 "data_offset": 2048, 00:12:16.707 "data_size": 63488 00:12:16.707 } 00:12:16.707 ] 00:12:16.707 }' 00:12:16.707 06:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.707 06:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.966 06:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev2 00:12:16.966 06:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.966 06:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.966 [2024-11-26 06:22:01.069673] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:16.966 06:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.966 06:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:16.966 06:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:16.966 06:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:16.966 06:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:16.966 06:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:16.966 06:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:16.966 06:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.966 06:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.966 06:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.966 06:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.966 06:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.966 06:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:16.966 06:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:16.966 06:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.966 06:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.225 06:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.225 "name": "Existed_Raid", 00:12:17.225 "uuid": "d2073cbb-2f74-4516-bb02-1d358033cb43", 00:12:17.225 "strip_size_kb": 64, 00:12:17.225 "state": "configuring", 00:12:17.225 "raid_level": "concat", 00:12:17.225 "superblock": true, 00:12:17.225 "num_base_bdevs": 3, 00:12:17.225 "num_base_bdevs_discovered": 1, 00:12:17.225 "num_base_bdevs_operational": 3, 00:12:17.225 "base_bdevs_list": [ 00:12:17.225 { 00:12:17.225 "name": "BaseBdev1", 00:12:17.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.225 "is_configured": false, 00:12:17.225 "data_offset": 0, 00:12:17.225 "data_size": 0 00:12:17.225 }, 00:12:17.225 { 00:12:17.225 "name": null, 00:12:17.225 "uuid": "bb7de4b9-e65d-448f-b2a4-30e7449303fc", 00:12:17.225 "is_configured": false, 00:12:17.225 "data_offset": 0, 00:12:17.225 "data_size": 63488 00:12:17.225 }, 00:12:17.225 { 00:12:17.225 "name": "BaseBdev3", 00:12:17.225 "uuid": "527b8e5a-b760-442f-8246-453823784fef", 00:12:17.225 "is_configured": true, 00:12:17.225 "data_offset": 2048, 00:12:17.225 "data_size": 63488 00:12:17.225 } 00:12:17.225 ] 00:12:17.225 }' 00:12:17.225 06:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.225 06:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.484 06:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:17.484 06:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.484 06:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:17.484 06:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.484 06:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.484 06:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:17.484 06:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:17.484 06:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.484 06:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.745 [2024-11-26 06:22:01.624991] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:17.745 BaseBdev1 00:12:17.745 06:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.745 06:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:17.745 06:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:17.745 06:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:17.745 06:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:17.745 06:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:17.745 06:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:17.745 06:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:17.745 06:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.745 06:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:12:17.745 06:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.745 06:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:17.745 06:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.745 06:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.745 [ 00:12:17.745 { 00:12:17.745 "name": "BaseBdev1", 00:12:17.745 "aliases": [ 00:12:17.745 "b6b926d4-e9d2-4643-bc0b-38cfa3dd3e12" 00:12:17.745 ], 00:12:17.745 "product_name": "Malloc disk", 00:12:17.745 "block_size": 512, 00:12:17.745 "num_blocks": 65536, 00:12:17.745 "uuid": "b6b926d4-e9d2-4643-bc0b-38cfa3dd3e12", 00:12:17.745 "assigned_rate_limits": { 00:12:17.745 "rw_ios_per_sec": 0, 00:12:17.745 "rw_mbytes_per_sec": 0, 00:12:17.745 "r_mbytes_per_sec": 0, 00:12:17.745 "w_mbytes_per_sec": 0 00:12:17.745 }, 00:12:17.745 "claimed": true, 00:12:17.745 "claim_type": "exclusive_write", 00:12:17.745 "zoned": false, 00:12:17.745 "supported_io_types": { 00:12:17.745 "read": true, 00:12:17.745 "write": true, 00:12:17.745 "unmap": true, 00:12:17.745 "flush": true, 00:12:17.745 "reset": true, 00:12:17.745 "nvme_admin": false, 00:12:17.745 "nvme_io": false, 00:12:17.745 "nvme_io_md": false, 00:12:17.745 "write_zeroes": true, 00:12:17.745 "zcopy": true, 00:12:17.745 "get_zone_info": false, 00:12:17.745 "zone_management": false, 00:12:17.745 "zone_append": false, 00:12:17.745 "compare": false, 00:12:17.745 "compare_and_write": false, 00:12:17.745 "abort": true, 00:12:17.745 "seek_hole": false, 00:12:17.745 "seek_data": false, 00:12:17.745 "copy": true, 00:12:17.745 "nvme_iov_md": false 00:12:17.745 }, 00:12:17.745 "memory_domains": [ 00:12:17.745 { 00:12:17.745 "dma_device_id": "system", 00:12:17.745 "dma_device_type": 1 00:12:17.745 }, 00:12:17.746 { 00:12:17.746 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:17.746 "dma_device_type": 2 00:12:17.746 } 00:12:17.746 ], 00:12:17.746 "driver_specific": {} 00:12:17.746 } 00:12:17.746 ] 00:12:17.746 06:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.746 06:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:17.746 06:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:17.746 06:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:17.746 06:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:17.746 06:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:17.746 06:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:17.746 06:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:17.746 06:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.746 06:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.746 06:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.746 06:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.746 06:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:17.746 06:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.746 06:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.746 06:22:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:17.746 06:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.746 06:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.746 "name": "Existed_Raid", 00:12:17.746 "uuid": "d2073cbb-2f74-4516-bb02-1d358033cb43", 00:12:17.746 "strip_size_kb": 64, 00:12:17.746 "state": "configuring", 00:12:17.746 "raid_level": "concat", 00:12:17.746 "superblock": true, 00:12:17.746 "num_base_bdevs": 3, 00:12:17.746 "num_base_bdevs_discovered": 2, 00:12:17.746 "num_base_bdevs_operational": 3, 00:12:17.746 "base_bdevs_list": [ 00:12:17.746 { 00:12:17.746 "name": "BaseBdev1", 00:12:17.746 "uuid": "b6b926d4-e9d2-4643-bc0b-38cfa3dd3e12", 00:12:17.746 "is_configured": true, 00:12:17.746 "data_offset": 2048, 00:12:17.746 "data_size": 63488 00:12:17.746 }, 00:12:17.746 { 00:12:17.746 "name": null, 00:12:17.746 "uuid": "bb7de4b9-e65d-448f-b2a4-30e7449303fc", 00:12:17.746 "is_configured": false, 00:12:17.746 "data_offset": 0, 00:12:17.746 "data_size": 63488 00:12:17.746 }, 00:12:17.746 { 00:12:17.746 "name": "BaseBdev3", 00:12:17.746 "uuid": "527b8e5a-b760-442f-8246-453823784fef", 00:12:17.746 "is_configured": true, 00:12:17.746 "data_offset": 2048, 00:12:17.746 "data_size": 63488 00:12:17.746 } 00:12:17.746 ] 00:12:17.746 }' 00:12:17.746 06:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.746 06:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.038 06:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.038 06:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.038 06:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.038 06:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- 
# jq '.[0].base_bdevs_list[0].is_configured' 00:12:18.309 06:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.309 06:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:18.309 06:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:18.309 06:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.309 06:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.309 [2024-11-26 06:22:02.196186] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:18.309 06:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.309 06:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:18.309 06:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:18.309 06:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:18.309 06:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:18.309 06:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:18.309 06:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:18.309 06:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.309 06:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.309 06:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.309 06:22:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.309 06:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.309 06:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:18.309 06:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.309 06:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.309 06:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.309 06:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.309 "name": "Existed_Raid", 00:12:18.309 "uuid": "d2073cbb-2f74-4516-bb02-1d358033cb43", 00:12:18.309 "strip_size_kb": 64, 00:12:18.309 "state": "configuring", 00:12:18.309 "raid_level": "concat", 00:12:18.309 "superblock": true, 00:12:18.309 "num_base_bdevs": 3, 00:12:18.309 "num_base_bdevs_discovered": 1, 00:12:18.309 "num_base_bdevs_operational": 3, 00:12:18.309 "base_bdevs_list": [ 00:12:18.309 { 00:12:18.309 "name": "BaseBdev1", 00:12:18.309 "uuid": "b6b926d4-e9d2-4643-bc0b-38cfa3dd3e12", 00:12:18.309 "is_configured": true, 00:12:18.309 "data_offset": 2048, 00:12:18.309 "data_size": 63488 00:12:18.309 }, 00:12:18.309 { 00:12:18.309 "name": null, 00:12:18.309 "uuid": "bb7de4b9-e65d-448f-b2a4-30e7449303fc", 00:12:18.309 "is_configured": false, 00:12:18.309 "data_offset": 0, 00:12:18.309 "data_size": 63488 00:12:18.309 }, 00:12:18.309 { 00:12:18.309 "name": null, 00:12:18.309 "uuid": "527b8e5a-b760-442f-8246-453823784fef", 00:12:18.309 "is_configured": false, 00:12:18.309 "data_offset": 0, 00:12:18.309 "data_size": 63488 00:12:18.309 } 00:12:18.309 ] 00:12:18.309 }' 00:12:18.309 06:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.309 06:22:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:18.569 06:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.569 06:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.569 06:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.569 06:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:18.828 06:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.828 06:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:18.828 06:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:18.828 06:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.828 06:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.828 [2024-11-26 06:22:02.743376] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:18.828 06:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.828 06:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:18.828 06:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:18.828 06:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:18.828 06:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:18.828 06:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:18.829 06:22:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:18.829 06:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.829 06:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.829 06:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.829 06:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.829 06:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.829 06:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.829 06:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.829 06:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:18.829 06:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.829 06:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.829 "name": "Existed_Raid", 00:12:18.829 "uuid": "d2073cbb-2f74-4516-bb02-1d358033cb43", 00:12:18.829 "strip_size_kb": 64, 00:12:18.829 "state": "configuring", 00:12:18.829 "raid_level": "concat", 00:12:18.829 "superblock": true, 00:12:18.829 "num_base_bdevs": 3, 00:12:18.829 "num_base_bdevs_discovered": 2, 00:12:18.829 "num_base_bdevs_operational": 3, 00:12:18.829 "base_bdevs_list": [ 00:12:18.829 { 00:12:18.829 "name": "BaseBdev1", 00:12:18.829 "uuid": "b6b926d4-e9d2-4643-bc0b-38cfa3dd3e12", 00:12:18.829 "is_configured": true, 00:12:18.829 "data_offset": 2048, 00:12:18.829 "data_size": 63488 00:12:18.829 }, 00:12:18.829 { 00:12:18.829 "name": null, 00:12:18.829 "uuid": "bb7de4b9-e65d-448f-b2a4-30e7449303fc", 00:12:18.829 "is_configured": 
false, 00:12:18.829 "data_offset": 0, 00:12:18.829 "data_size": 63488 00:12:18.829 }, 00:12:18.829 { 00:12:18.829 "name": "BaseBdev3", 00:12:18.829 "uuid": "527b8e5a-b760-442f-8246-453823784fef", 00:12:18.829 "is_configured": true, 00:12:18.829 "data_offset": 2048, 00:12:18.829 "data_size": 63488 00:12:18.829 } 00:12:18.829 ] 00:12:18.829 }' 00:12:18.829 06:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.829 06:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.087 06:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.087 06:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:19.087 06:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.087 06:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.087 06:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.087 06:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:19.087 06:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:19.087 06:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.087 06:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.346 [2024-11-26 06:22:03.222615] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:19.346 06:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.346 06:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:19.346 06:22:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:19.346 06:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:19.346 06:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:19.346 06:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:19.346 06:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:19.346 06:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.346 06:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.346 06:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.346 06:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.346 06:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.346 06:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:19.346 06:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.346 06:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.346 06:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.346 06:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.346 "name": "Existed_Raid", 00:12:19.346 "uuid": "d2073cbb-2f74-4516-bb02-1d358033cb43", 00:12:19.346 "strip_size_kb": 64, 00:12:19.346 "state": "configuring", 00:12:19.346 "raid_level": "concat", 00:12:19.347 "superblock": true, 00:12:19.347 "num_base_bdevs": 3, 00:12:19.347 
"num_base_bdevs_discovered": 1, 00:12:19.347 "num_base_bdevs_operational": 3, 00:12:19.347 "base_bdevs_list": [ 00:12:19.347 { 00:12:19.347 "name": null, 00:12:19.347 "uuid": "b6b926d4-e9d2-4643-bc0b-38cfa3dd3e12", 00:12:19.347 "is_configured": false, 00:12:19.347 "data_offset": 0, 00:12:19.347 "data_size": 63488 00:12:19.347 }, 00:12:19.347 { 00:12:19.347 "name": null, 00:12:19.347 "uuid": "bb7de4b9-e65d-448f-b2a4-30e7449303fc", 00:12:19.347 "is_configured": false, 00:12:19.347 "data_offset": 0, 00:12:19.347 "data_size": 63488 00:12:19.347 }, 00:12:19.347 { 00:12:19.347 "name": "BaseBdev3", 00:12:19.347 "uuid": "527b8e5a-b760-442f-8246-453823784fef", 00:12:19.347 "is_configured": true, 00:12:19.347 "data_offset": 2048, 00:12:19.347 "data_size": 63488 00:12:19.347 } 00:12:19.347 ] 00:12:19.347 }' 00:12:19.347 06:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.347 06:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.916 06:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.916 06:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:19.916 06:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.916 06:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.916 06:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.916 06:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:19.916 06:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:19.916 06:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.916 06:22:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.916 [2024-11-26 06:22:03.855049] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:19.916 06:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.916 06:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:19.916 06:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:19.916 06:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:19.916 06:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:19.916 06:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:19.916 06:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:19.916 06:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.916 06:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.916 06:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.916 06:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.916 06:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.916 06:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:19.916 06:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.916 06:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.916 
06:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.916 06:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.916 "name": "Existed_Raid", 00:12:19.916 "uuid": "d2073cbb-2f74-4516-bb02-1d358033cb43", 00:12:19.916 "strip_size_kb": 64, 00:12:19.916 "state": "configuring", 00:12:19.916 "raid_level": "concat", 00:12:19.916 "superblock": true, 00:12:19.916 "num_base_bdevs": 3, 00:12:19.916 "num_base_bdevs_discovered": 2, 00:12:19.916 "num_base_bdevs_operational": 3, 00:12:19.916 "base_bdevs_list": [ 00:12:19.916 { 00:12:19.916 "name": null, 00:12:19.916 "uuid": "b6b926d4-e9d2-4643-bc0b-38cfa3dd3e12", 00:12:19.916 "is_configured": false, 00:12:19.916 "data_offset": 0, 00:12:19.916 "data_size": 63488 00:12:19.916 }, 00:12:19.916 { 00:12:19.916 "name": "BaseBdev2", 00:12:19.916 "uuid": "bb7de4b9-e65d-448f-b2a4-30e7449303fc", 00:12:19.916 "is_configured": true, 00:12:19.916 "data_offset": 2048, 00:12:19.916 "data_size": 63488 00:12:19.916 }, 00:12:19.916 { 00:12:19.916 "name": "BaseBdev3", 00:12:19.916 "uuid": "527b8e5a-b760-442f-8246-453823784fef", 00:12:19.916 "is_configured": true, 00:12:19.916 "data_offset": 2048, 00:12:19.916 "data_size": 63488 00:12:19.916 } 00:12:19.916 ] 00:12:19.916 }' 00:12:19.916 06:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.916 06:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.486 06:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:20.486 06:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.486 06:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.486 06:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:12:20.486 06:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.486 06:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:20.486 06:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.486 06:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.486 06:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.486 06:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:20.486 06:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.486 06:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b6b926d4-e9d2-4643-bc0b-38cfa3dd3e12 00:12:20.486 06:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.486 06:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.486 [2024-11-26 06:22:04.462585] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:20.486 [2024-11-26 06:22:04.462913] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:20.486 [2024-11-26 06:22:04.462933] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:20.486 [2024-11-26 06:22:04.463344] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:20.486 NewBaseBdev 00:12:20.486 [2024-11-26 06:22:04.463559] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:20.486 [2024-11-26 06:22:04.463596] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000008200 00:12:20.486 [2024-11-26 06:22:04.463798] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:20.486 06:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.486 06:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:20.486 06:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:20.486 06:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:20.486 06:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:20.487 06:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:20.487 06:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:20.487 06:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:20.487 06:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.487 06:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.487 06:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.487 06:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:20.487 06:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.487 06:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.487 [ 00:12:20.487 { 00:12:20.487 "name": "NewBaseBdev", 00:12:20.487 "aliases": [ 00:12:20.487 "b6b926d4-e9d2-4643-bc0b-38cfa3dd3e12" 00:12:20.487 ], 00:12:20.487 "product_name": "Malloc disk", 00:12:20.487 "block_size": 512, 
00:12:20.487 "num_blocks": 65536, 00:12:20.487 "uuid": "b6b926d4-e9d2-4643-bc0b-38cfa3dd3e12", 00:12:20.487 "assigned_rate_limits": { 00:12:20.487 "rw_ios_per_sec": 0, 00:12:20.487 "rw_mbytes_per_sec": 0, 00:12:20.487 "r_mbytes_per_sec": 0, 00:12:20.487 "w_mbytes_per_sec": 0 00:12:20.487 }, 00:12:20.487 "claimed": true, 00:12:20.487 "claim_type": "exclusive_write", 00:12:20.487 "zoned": false, 00:12:20.487 "supported_io_types": { 00:12:20.487 "read": true, 00:12:20.487 "write": true, 00:12:20.487 "unmap": true, 00:12:20.487 "flush": true, 00:12:20.487 "reset": true, 00:12:20.487 "nvme_admin": false, 00:12:20.487 "nvme_io": false, 00:12:20.487 "nvme_io_md": false, 00:12:20.487 "write_zeroes": true, 00:12:20.487 "zcopy": true, 00:12:20.487 "get_zone_info": false, 00:12:20.487 "zone_management": false, 00:12:20.487 "zone_append": false, 00:12:20.487 "compare": false, 00:12:20.487 "compare_and_write": false, 00:12:20.487 "abort": true, 00:12:20.487 "seek_hole": false, 00:12:20.487 "seek_data": false, 00:12:20.487 "copy": true, 00:12:20.487 "nvme_iov_md": false 00:12:20.487 }, 00:12:20.487 "memory_domains": [ 00:12:20.487 { 00:12:20.487 "dma_device_id": "system", 00:12:20.487 "dma_device_type": 1 00:12:20.487 }, 00:12:20.487 { 00:12:20.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:20.487 "dma_device_type": 2 00:12:20.487 } 00:12:20.487 ], 00:12:20.487 "driver_specific": {} 00:12:20.487 } 00:12:20.487 ] 00:12:20.487 06:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.487 06:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:20.487 06:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:12:20.487 06:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:20.487 06:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:12:20.487 06:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:20.487 06:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:20.487 06:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:20.487 06:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.487 06:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.487 06:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:20.487 06:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.487 06:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.487 06:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:20.487 06:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.487 06:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.487 06:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.487 06:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.487 "name": "Existed_Raid", 00:12:20.487 "uuid": "d2073cbb-2f74-4516-bb02-1d358033cb43", 00:12:20.487 "strip_size_kb": 64, 00:12:20.487 "state": "online", 00:12:20.487 "raid_level": "concat", 00:12:20.487 "superblock": true, 00:12:20.487 "num_base_bdevs": 3, 00:12:20.487 "num_base_bdevs_discovered": 3, 00:12:20.487 "num_base_bdevs_operational": 3, 00:12:20.487 "base_bdevs_list": [ 00:12:20.487 { 00:12:20.487 "name": "NewBaseBdev", 00:12:20.487 "uuid": 
"b6b926d4-e9d2-4643-bc0b-38cfa3dd3e12", 00:12:20.487 "is_configured": true, 00:12:20.487 "data_offset": 2048, 00:12:20.487 "data_size": 63488 00:12:20.487 }, 00:12:20.487 { 00:12:20.487 "name": "BaseBdev2", 00:12:20.487 "uuid": "bb7de4b9-e65d-448f-b2a4-30e7449303fc", 00:12:20.487 "is_configured": true, 00:12:20.487 "data_offset": 2048, 00:12:20.487 "data_size": 63488 00:12:20.487 }, 00:12:20.487 { 00:12:20.487 "name": "BaseBdev3", 00:12:20.487 "uuid": "527b8e5a-b760-442f-8246-453823784fef", 00:12:20.487 "is_configured": true, 00:12:20.487 "data_offset": 2048, 00:12:20.487 "data_size": 63488 00:12:20.487 } 00:12:20.487 ] 00:12:20.487 }' 00:12:20.487 06:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.487 06:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.057 06:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:21.057 06:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:21.057 06:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:21.057 06:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:21.057 06:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:21.057 06:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:21.057 06:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:21.057 06:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:21.057 06:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.057 06:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:12:21.057 [2024-11-26 06:22:04.918225] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:21.057 06:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.057 06:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:21.057 "name": "Existed_Raid", 00:12:21.057 "aliases": [ 00:12:21.057 "d2073cbb-2f74-4516-bb02-1d358033cb43" 00:12:21.057 ], 00:12:21.057 "product_name": "Raid Volume", 00:12:21.057 "block_size": 512, 00:12:21.057 "num_blocks": 190464, 00:12:21.057 "uuid": "d2073cbb-2f74-4516-bb02-1d358033cb43", 00:12:21.057 "assigned_rate_limits": { 00:12:21.057 "rw_ios_per_sec": 0, 00:12:21.057 "rw_mbytes_per_sec": 0, 00:12:21.057 "r_mbytes_per_sec": 0, 00:12:21.057 "w_mbytes_per_sec": 0 00:12:21.057 }, 00:12:21.057 "claimed": false, 00:12:21.057 "zoned": false, 00:12:21.057 "supported_io_types": { 00:12:21.057 "read": true, 00:12:21.057 "write": true, 00:12:21.057 "unmap": true, 00:12:21.057 "flush": true, 00:12:21.057 "reset": true, 00:12:21.057 "nvme_admin": false, 00:12:21.057 "nvme_io": false, 00:12:21.057 "nvme_io_md": false, 00:12:21.057 "write_zeroes": true, 00:12:21.057 "zcopy": false, 00:12:21.057 "get_zone_info": false, 00:12:21.057 "zone_management": false, 00:12:21.057 "zone_append": false, 00:12:21.057 "compare": false, 00:12:21.057 "compare_and_write": false, 00:12:21.057 "abort": false, 00:12:21.057 "seek_hole": false, 00:12:21.057 "seek_data": false, 00:12:21.057 "copy": false, 00:12:21.057 "nvme_iov_md": false 00:12:21.057 }, 00:12:21.057 "memory_domains": [ 00:12:21.057 { 00:12:21.057 "dma_device_id": "system", 00:12:21.058 "dma_device_type": 1 00:12:21.058 }, 00:12:21.058 { 00:12:21.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:21.058 "dma_device_type": 2 00:12:21.058 }, 00:12:21.058 { 00:12:21.058 "dma_device_id": "system", 00:12:21.058 "dma_device_type": 1 00:12:21.058 }, 00:12:21.058 { 00:12:21.058 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:21.058 "dma_device_type": 2 00:12:21.058 }, 00:12:21.058 { 00:12:21.058 "dma_device_id": "system", 00:12:21.058 "dma_device_type": 1 00:12:21.058 }, 00:12:21.058 { 00:12:21.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:21.058 "dma_device_type": 2 00:12:21.058 } 00:12:21.058 ], 00:12:21.058 "driver_specific": { 00:12:21.058 "raid": { 00:12:21.058 "uuid": "d2073cbb-2f74-4516-bb02-1d358033cb43", 00:12:21.058 "strip_size_kb": 64, 00:12:21.058 "state": "online", 00:12:21.058 "raid_level": "concat", 00:12:21.058 "superblock": true, 00:12:21.058 "num_base_bdevs": 3, 00:12:21.058 "num_base_bdevs_discovered": 3, 00:12:21.058 "num_base_bdevs_operational": 3, 00:12:21.058 "base_bdevs_list": [ 00:12:21.058 { 00:12:21.058 "name": "NewBaseBdev", 00:12:21.058 "uuid": "b6b926d4-e9d2-4643-bc0b-38cfa3dd3e12", 00:12:21.058 "is_configured": true, 00:12:21.058 "data_offset": 2048, 00:12:21.058 "data_size": 63488 00:12:21.058 }, 00:12:21.058 { 00:12:21.058 "name": "BaseBdev2", 00:12:21.058 "uuid": "bb7de4b9-e65d-448f-b2a4-30e7449303fc", 00:12:21.058 "is_configured": true, 00:12:21.058 "data_offset": 2048, 00:12:21.058 "data_size": 63488 00:12:21.058 }, 00:12:21.058 { 00:12:21.058 "name": "BaseBdev3", 00:12:21.058 "uuid": "527b8e5a-b760-442f-8246-453823784fef", 00:12:21.058 "is_configured": true, 00:12:21.058 "data_offset": 2048, 00:12:21.058 "data_size": 63488 00:12:21.058 } 00:12:21.058 ] 00:12:21.058 } 00:12:21.058 } 00:12:21.058 }' 00:12:21.058 06:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:21.058 06:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:21.058 BaseBdev2 00:12:21.058 BaseBdev3' 00:12:21.058 06:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:12:21.058 06:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:21.058 06:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:21.058 06:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:21.058 06:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:21.058 06:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.058 06:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.058 06:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.058 06:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:21.058 06:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:21.058 06:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:21.058 06:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:21.058 06:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:21.058 06:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.058 06:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.058 06:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.058 06:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:21.058 06:22:05 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:21.058 06:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:21.058 06:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:21.058 06:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.058 06:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.058 06:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:21.058 06:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.319 06:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:21.319 06:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:21.319 06:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:21.319 06:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.319 06:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.319 [2024-11-26 06:22:05.213391] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:21.319 [2024-11-26 06:22:05.213430] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:21.319 [2024-11-26 06:22:05.213553] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:21.319 [2024-11-26 06:22:05.213625] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:21.319 [2024-11-26 06:22:05.213641] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:12:21.319 06:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.319 06:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66646 00:12:21.319 06:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 66646 ']' 00:12:21.319 06:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 66646 00:12:21.319 06:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:21.319 06:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:21.319 06:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66646 00:12:21.319 06:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:21.319 06:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:21.319 killing process with pid 66646 00:12:21.319 06:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66646' 00:12:21.319 06:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 66646 00:12:21.319 [2024-11-26 06:22:05.261196] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:21.319 06:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 66646 00:12:21.578 [2024-11-26 06:22:05.598553] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:22.957 06:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:22.957 00:12:22.957 real 0m11.238s 00:12:22.957 user 0m17.508s 00:12:22.957 sys 0m2.146s 00:12:22.957 06:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:12:22.957 06:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.957 ************************************ 00:12:22.957 END TEST raid_state_function_test_sb 00:12:22.957 ************************************ 00:12:22.957 06:22:06 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:12:22.957 06:22:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:22.957 06:22:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:22.957 06:22:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:22.957 ************************************ 00:12:22.957 START TEST raid_superblock_test 00:12:22.957 ************************************ 00:12:22.957 06:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:12:22.957 06:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:12:22.957 06:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:12:22.957 06:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:22.957 06:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:22.957 06:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:22.957 06:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:22.957 06:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:22.957 06:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:22.957 06:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:22.957 06:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:22.957 06:22:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:22.957 06:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:22.957 06:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:22.957 06:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:12:22.957 06:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:12:22.957 06:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:12:22.957 06:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=67273 00:12:22.957 06:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 67273 00:12:22.957 06:22:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:22.957 06:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 67273 ']' 00:12:22.957 06:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:22.957 06:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:22.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:22.957 06:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:22.957 06:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:22.957 06:22:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.957 [2024-11-26 06:22:07.057500] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:12:22.957 [2024-11-26 06:22:07.057674] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67273 ]
00:12:23.217 [2024-11-26 06:22:07.240112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:23.479 [2024-11-26 06:22:07.390986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:23.739 [2024-11-26 06:22:07.647951] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:12:23.739 [2024-11-26 06:22:07.648001] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:12:23.998 06:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:23.998 06:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:12:23.998 06:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:12:23.998 06:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:12:23.998 06:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:12:23.998 06:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:12:23.998 06:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:12:23.998 06:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:12:23.998 06:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:12:23.998 06:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:12:23.998 06:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:12:23.998 06:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:23.998 06:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:23.998 malloc1
00:12:23.998 06:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:23.998 06:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:12:23.998 06:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:23.998 06:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:23.998 [2024-11-26 06:22:07.990475] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:12:23.998 [2024-11-26 06:22:07.990551] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:23.998 [2024-11-26 06:22:07.990578] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:12:23.998 [2024-11-26 06:22:07.990588] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:23.998 [2024-11-26 06:22:07.993334] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:23.998 [2024-11-26 06:22:07.993382] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:12:23.998 pt1
00:12:23.998 06:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:23.998 06:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:12:23.998 06:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:12:23.998 06:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:12:23.998 06:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:12:23.998 06:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:12:23.998 06:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:12:23.998 06:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:12:23.998 06:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:12:23.998 06:22:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:12:23.998 06:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:23.998 06:22:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:23.998 malloc2
00:12:23.998 06:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:23.998 06:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:12:23.998 06:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:23.998 06:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:23.998 [2024-11-26 06:22:08.062602] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:12:23.998 [2024-11-26 06:22:08.062678] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:23.998 [2024-11-26 06:22:08.062708] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:12:23.998 [2024-11-26 06:22:08.062719] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:23.998 [2024-11-26 06:22:08.065790] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:23.998 [2024-11-26 06:22:08.065842] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:12:23.998 pt2
00:12:23.998 06:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:23.998 06:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:12:23.998 06:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:12:23.998 06:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3
00:12:23.998 06:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3
00:12:23.998 06:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:12:23.998 06:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:12:23.998 06:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:12:23.998 06:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:12:23.998 06:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3
00:12:23.998 06:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:23.998 06:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:24.257 malloc3
00:12:24.257 06:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:24.257 06:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:12:24.257 06:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:24.257 06:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:24.257 [2024-11-26 06:22:08.145367] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:12:24.257 [2024-11-26 06:22:08.145436] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:24.257 [2024-11-26 06:22:08.145461] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:12:24.257 [2024-11-26 06:22:08.145471] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:24.257 [2024-11-26 06:22:08.148216] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:24.257 [2024-11-26 06:22:08.148256] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:12:24.257 pt3
00:12:24.257 06:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:24.257 06:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:12:24.257 06:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:12:24.257 06:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s
00:12:24.257 06:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:24.257 06:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:24.257 [2024-11-26 06:22:08.157422] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:12:24.257 [2024-11-26 06:22:08.159711] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:12:24.258 [2024-11-26 06:22:08.159781] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:12:24.258 [2024-11-26 06:22:08.159948] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:12:24.258 [2024-11-26 06:22:08.159986] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:12:24.258 [2024-11-26 06:22:08.160332] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:12:24.258 [2024-11-26 06:22:08.160558] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:12:24.258 [2024-11-26 06:22:08.160579] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:12:24.258 [2024-11-26 06:22:08.160781] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:24.258 06:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:24.258 06:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3
00:12:24.258 06:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:24.258 06:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:24.258 06:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:12:24.258 06:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:24.258 06:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:24.258 06:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:24.258 06:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:24.258 06:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:24.258 06:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:24.258 06:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:24.258 06:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:24.258 06:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:24.258 06:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:24.258 06:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:24.258 06:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:24.258 "name": "raid_bdev1",
00:12:24.258 "uuid": "b4ffc5cb-a070-4613-a810-c351776cc356",
00:12:24.258 "strip_size_kb": 64,
00:12:24.258 "state": "online",
00:12:24.258 "raid_level": "concat",
00:12:24.258 "superblock": true,
00:12:24.258 "num_base_bdevs": 3,
00:12:24.258 "num_base_bdevs_discovered": 3,
00:12:24.258 "num_base_bdevs_operational": 3,
00:12:24.258 "base_bdevs_list": [
00:12:24.258 {
00:12:24.258 "name": "pt1",
00:12:24.258 "uuid": "00000000-0000-0000-0000-000000000001",
00:12:24.258 "is_configured": true,
00:12:24.258 "data_offset": 2048,
00:12:24.258 "data_size": 63488
00:12:24.258 },
00:12:24.258 {
00:12:24.258 "name": "pt2",
00:12:24.258 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:24.258 "is_configured": true,
00:12:24.258 "data_offset": 2048,
00:12:24.258 "data_size": 63488
00:12:24.258 },
00:12:24.258 {
00:12:24.258 "name": "pt3",
00:12:24.258 "uuid": "00000000-0000-0000-0000-000000000003",
00:12:24.258 "is_configured": true,
00:12:24.258 "data_offset": 2048,
00:12:24.258 "data_size": 63488
00:12:24.258 }
00:12:24.258 ]
00:12:24.258 }'
00:12:24.258 06:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:24.258 06:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:24.517 06:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:12:24.517 06:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:12:24.517 06:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:12:24.517 06:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:12:24.517 06:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:12:24.517 06:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:12:24.517 06:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:12:24.517 06:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:24.517 06:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:24.775 06:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:12:24.775 [2024-11-26 06:22:08.652981] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:12:24.775 06:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:24.775 06:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:12:24.775 "name": "raid_bdev1",
00:12:24.775 "aliases": [
00:12:24.775 "b4ffc5cb-a070-4613-a810-c351776cc356"
00:12:24.775 ],
00:12:24.775 "product_name": "Raid Volume",
00:12:24.775 "block_size": 512,
00:12:24.775 "num_blocks": 190464,
00:12:24.775 "uuid": "b4ffc5cb-a070-4613-a810-c351776cc356",
00:12:24.775 "assigned_rate_limits": {
00:12:24.775 "rw_ios_per_sec": 0,
00:12:24.775 "rw_mbytes_per_sec": 0,
00:12:24.775 "r_mbytes_per_sec": 0,
00:12:24.775 "w_mbytes_per_sec": 0
00:12:24.775 },
00:12:24.775 "claimed": false,
00:12:24.775 "zoned": false,
00:12:24.775 "supported_io_types": {
00:12:24.775 "read": true,
00:12:24.775 "write": true,
00:12:24.775 "unmap": true,
00:12:24.775 "flush": true,
00:12:24.775 "reset": true,
00:12:24.775 "nvme_admin": false,
00:12:24.775 "nvme_io": false,
00:12:24.775 "nvme_io_md": false,
00:12:24.775 "write_zeroes": true,
00:12:24.775 "zcopy": false,
00:12:24.775 "get_zone_info": false,
00:12:24.775 "zone_management": false,
00:12:24.775 "zone_append": false,
00:12:24.775 "compare": false,
00:12:24.775 "compare_and_write": false,
00:12:24.775 "abort": false,
00:12:24.775 "seek_hole": false,
00:12:24.775 "seek_data": false,
00:12:24.775 "copy": false,
00:12:24.775 "nvme_iov_md": false
00:12:24.775 },
00:12:24.775 "memory_domains": [
00:12:24.775 {
00:12:24.775 "dma_device_id": "system",
00:12:24.775 "dma_device_type": 1
00:12:24.775 },
00:12:24.775 {
00:12:24.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:24.775 "dma_device_type": 2
00:12:24.775 },
00:12:24.775 {
00:12:24.775 "dma_device_id": "system",
00:12:24.775 "dma_device_type": 1
00:12:24.775 },
00:12:24.775 {
00:12:24.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:24.775 "dma_device_type": 2
00:12:24.775 },
00:12:24.775 {
00:12:24.775 "dma_device_id": "system",
00:12:24.775 "dma_device_type": 1
00:12:24.775 },
00:12:24.775 {
00:12:24.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:24.775 "dma_device_type": 2
00:12:24.775 }
00:12:24.775 ],
00:12:24.775 "driver_specific": {
00:12:24.775 "raid": {
00:12:24.775 "uuid": "b4ffc5cb-a070-4613-a810-c351776cc356",
00:12:24.775 "strip_size_kb": 64,
00:12:24.775 "state": "online",
00:12:24.775 "raid_level": "concat",
00:12:24.775 "superblock": true,
00:12:24.775 "num_base_bdevs": 3,
00:12:24.775 "num_base_bdevs_discovered": 3,
00:12:24.775 "num_base_bdevs_operational": 3,
00:12:24.775 "base_bdevs_list": [
00:12:24.775 {
00:12:24.775 "name": "pt1",
00:12:24.775 "uuid": "00000000-0000-0000-0000-000000000001",
00:12:24.775 "is_configured": true,
00:12:24.775 "data_offset": 2048,
00:12:24.775 "data_size": 63488
00:12:24.775 },
00:12:24.775 {
00:12:24.775 "name": "pt2",
00:12:24.775 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:24.775 "is_configured": true,
00:12:24.775 "data_offset": 2048,
00:12:24.775 "data_size": 63488
00:12:24.776 },
00:12:24.776 {
00:12:24.776 "name": "pt3",
00:12:24.776 "uuid": "00000000-0000-0000-0000-000000000003",
00:12:24.776 "is_configured": true,
00:12:24.776 "data_offset": 2048,
00:12:24.776 "data_size": 63488
00:12:24.776 }
00:12:24.776 ]
00:12:24.776 }
00:12:24.776 }
00:12:24.776 }'
00:12:24.776 06:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:12:24.776 06:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:12:24.776 pt2
00:12:24.776 pt3'
00:12:24.776 06:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:24.776 06:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:12:24.776 06:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:24.776 06:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:12:24.776 06:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:24.776 06:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:24.776 06:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:24.776 06:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:24.776 06:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:24.776 06:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:24.776 06:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:24.776 06:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:24.776 06:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:12:24.776 06:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:24.776 06:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:24.776 06:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:24.776 06:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:24.776 06:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:24.776 06:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:24.776 06:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:24.776 06:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:12:24.776 06:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:24.776 06:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:24.776 06:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:25.034 06:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:25.034 06:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:25.034 06:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:12:25.034 06:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:12:25.034 06:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:25.034 06:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:25.034 [2024-11-26 06:22:08.916522] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:12:25.034 06:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:25.034 06:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b4ffc5cb-a070-4613-a810-c351776cc356
00:12:25.034 06:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z b4ffc5cb-a070-4613-a810-c351776cc356 ']'
00:12:25.034 06:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:12:25.034 06:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:25.034 06:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:25.034 [2024-11-26 06:22:08.948098] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:12:25.034 [2024-11-26 06:22:08.948137] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:12:25.034 [2024-11-26 06:22:08.948261] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:12:25.034 [2024-11-26 06:22:08.948352] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:12:25.034 [2024-11-26 06:22:08.948385] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:12:25.034 06:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:25.034 06:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:25.034 06:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:12:25.034 06:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:25.034 06:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:25.034 06:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:25.034 06:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:12:25.034 06:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:12:25.034 06:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:12:25.034 06:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:12:25.034 06:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:25.034 06:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:25.034 06:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:25.034 06:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:12:25.034 06:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:12:25.034 06:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:25.034 06:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:25.034 06:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:25.034 06:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:12:25.034 06:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:12:25.034 06:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:25.034 06:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:25.034 06:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:25.034 06:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:12:25.034 06:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:25.034 06:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:25.034 06:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:12:25.034 06:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:25.034 06:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:12:25.034 06:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:12:25.034 06:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0
00:12:25.034 06:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:12:25.034 06:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:12:25.034 06:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:25.034 06:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:12:25.034 06:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:25.034 06:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:12:25.034 06:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:25.034 06:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:25.034 [2024-11-26 06:22:09.103953] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:12:25.034 [2024-11-26 06:22:09.106540] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:12:25.034 [2024-11-26 06:22:09.106610] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:12:25.034 [2024-11-26 06:22:09.106671] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:12:25.035 [2024-11-26 06:22:09.106743] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:12:25.035 [2024-11-26 06:22:09.106767] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:12:25.035 [2024-11-26 06:22:09.106788] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:12:25.035 [2024-11-26 06:22:09.106800] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:12:25.035 request:
00:12:25.035 {
00:12:25.035 "name": "raid_bdev1",
00:12:25.035 "raid_level": "concat",
00:12:25.035 "base_bdevs": [
00:12:25.035 "malloc1",
00:12:25.035 "malloc2",
00:12:25.035 "malloc3"
00:12:25.035 ],
00:12:25.035 "strip_size_kb": 64,
00:12:25.035 "superblock": false,
00:12:25.035 "method": "bdev_raid_create",
00:12:25.035 "req_id": 1
00:12:25.035 }
00:12:25.035 Got JSON-RPC error response
00:12:25.035 response:
00:12:25.035 {
00:12:25.035 "code": -17,
00:12:25.035 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:12:25.035 }
00:12:25.035 06:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:12:25.035 06:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1
00:12:25.035 06:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:12:25.035 06:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:12:25.035 06:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:12:25.035 06:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:12:25.035 06:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:25.035 06:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:25.035 06:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:25.035 06:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:25.291 06:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:12:25.291 06:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:12:25.291 06:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:12:25.291 06:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:25.291 06:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:25.291 [2024-11-26 06:22:09.187755] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:12:25.291 [2024-11-26 06:22:09.187922] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:25.291 [2024-11-26 06:22:09.187973] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:12:25.291 [2024-11-26 06:22:09.188036] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:25.291 [2024-11-26 06:22:09.191031] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:25.291 [2024-11-26 06:22:09.191131] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:12:25.291 [2024-11-26 06:22:09.191274] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:12:25.291 [2024-11-26 06:22:09.191411] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:12:25.291 pt1
00:12:25.291 06:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:25.291 06:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3
00:12:25.291 06:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:25.291 06:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:25.291 06:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:12:25.291 06:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:25.291 06:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:25.291 06:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:25.291 06:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:25.291 06:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:25.291 06:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:25.291 06:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:25.291 06:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:25.291 06:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:25.291 06:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:25.291 06:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:25.291 06:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:25.291 "name": "raid_bdev1",
00:12:25.291 "uuid": "b4ffc5cb-a070-4613-a810-c351776cc356",
00:12:25.291 "strip_size_kb": 64,
00:12:25.291 "state": "configuring",
00:12:25.291 "raid_level": "concat",
00:12:25.291 "superblock": true,
00:12:25.291 "num_base_bdevs": 3,
00:12:25.291 "num_base_bdevs_discovered": 1,
00:12:25.291 "num_base_bdevs_operational": 3,
00:12:25.291 "base_bdevs_list": [
00:12:25.291 {
00:12:25.291 "name": "pt1",
00:12:25.291 "uuid": "00000000-0000-0000-0000-000000000001",
00:12:25.291 "is_configured": true,
00:12:25.291 "data_offset": 2048,
00:12:25.291 "data_size": 63488
00:12:25.291 },
00:12:25.291 {
00:12:25.291 "name": null,
00:12:25.291 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:25.291 "is_configured": false,
00:12:25.291 "data_offset": 2048,
00:12:25.292 "data_size": 63488
00:12:25.292 },
00:12:25.292 {
00:12:25.292 "name": null,
00:12:25.292 "uuid": "00000000-0000-0000-0000-000000000003",
00:12:25.292 "is_configured": false,
00:12:25.292 "data_offset": 2048,
00:12:25.292 "data_size": 63488
00:12:25.292 }
00:12:25.292 ]
00:12:25.292 }'
00:12:25.292 06:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:25.292 06:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:25.548 06:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']'
00:12:25.548 06:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:12:25.548 06:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:25.548 06:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:25.548 [2024-11-26 06:22:09.650937] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:12:25.548 [2024-11-26 06:22:09.651022] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:25.548 [2024-11-26 06:22:09.651061] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80
00:12:25.548 [2024-11-26 06:22:09.651073] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:25.548 [2024-11-26 06:22:09.651640] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:25.548 [2024-11-26 06:22:09.651668] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:12:25.548 [2024-11-26 06:22:09.651795] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:12:25.548 [2024-11-26 06:22:09.651822] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:12:25.548 pt2
00:12:25.548 06:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:25.548 06:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:12:25.548 06:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:25.548 06:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:25.548 [2024-11-26 06:22:09.658917] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:12:25.548 06:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:25.548 06:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3
00:12:25.548 06:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:25.548 06:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:25.548 06:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:12:25.548 06:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:25.548 06:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:25.548 06:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:25.548 06:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:25.548 06:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:25.548 06:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:25.548 06:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:25.548 06:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:25.548 06:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:25.548 06:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:25.805 06:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:25.805 06:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:25.805 "name": "raid_bdev1",
00:12:25.805 "uuid": "b4ffc5cb-a070-4613-a810-c351776cc356",
00:12:25.805 "strip_size_kb": 64,
00:12:25.805 "state": "configuring",
00:12:25.805 "raid_level": "concat",
00:12:25.805 "superblock": true,
00:12:25.805 "num_base_bdevs": 3,
00:12:25.805 "num_base_bdevs_discovered": 1,
00:12:25.805 "num_base_bdevs_operational": 3,
00:12:25.805 "base_bdevs_list": [
00:12:25.805 {
00:12:25.805 "name": "pt1",
00:12:25.805 "uuid": "00000000-0000-0000-0000-000000000001",
00:12:25.805 "is_configured": true,
00:12:25.805 "data_offset": 2048,
00:12:25.805 "data_size": 63488
00:12:25.805 },
00:12:25.805 {
00:12:25.805 "name": null,
00:12:25.805 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:25.805 "is_configured": false,
00:12:25.805 "data_offset": 0,
00:12:25.805 "data_size": 63488
00:12:25.805 },
00:12:25.805 {
00:12:25.805 "name": null,
00:12:25.805 "uuid": "00000000-0000-0000-0000-000000000003",
00:12:25.805 "is_configured": false,
00:12:25.805 "data_offset": 2048,
00:12:25.805 "data_size": 63488
00:12:25.805 }
00:12:25.805 ]
00:12:25.805 }'
00:12:25.805 06:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:25.805 06:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:26.064 06:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:12:26.064 06:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:12:26.064 06:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:12:26.064 06:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:26.064 06:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:26.064 [2024-11-26 06:22:10.126130] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:12:26.064 [2024-11-26 06:22:10.126283] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:26.064 [2024-11-26 06:22:10.126330] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80
00:12:26.064 [2024-11-26 06:22:10.126389] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:26.064 [2024-11-26 06:22:10.127077] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:26.064 [2024-11-26 06:22:10.127148] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:12:26.064 [2024-11-26 06:22:10.127303] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:12:26.064 [2024-11-26 06:22:10.127380] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:12:26.064 pt2
00:12:26.064 06:22:10
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.064 06:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:26.064 06:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:26.064 06:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:26.064 06:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.064 06:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.064 [2024-11-26 06:22:10.138057] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:26.064 [2024-11-26 06:22:10.138140] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:26.064 [2024-11-26 06:22:10.138158] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:26.064 [2024-11-26 06:22:10.138172] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:26.064 [2024-11-26 06:22:10.138677] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:26.064 [2024-11-26 06:22:10.138712] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:26.064 [2024-11-26 06:22:10.138794] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:26.064 [2024-11-26 06:22:10.138822] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:26.064 [2024-11-26 06:22:10.138979] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:26.064 [2024-11-26 06:22:10.138994] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:26.064 [2024-11-26 06:22:10.139357] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 00:12:26.064 [2024-11-26 06:22:10.139545] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:26.064 [2024-11-26 06:22:10.139557] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:26.064 [2024-11-26 06:22:10.139736] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:26.064 pt3 00:12:26.064 06:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.064 06:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:26.064 06:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:26.064 06:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:12:26.064 06:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:26.064 06:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:26.064 06:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:26.064 06:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:26.064 06:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:26.065 06:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.065 06:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.065 06:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.065 06:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.065 06:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.065 06:22:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.065 06:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.065 06:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.065 06:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.323 06:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.323 "name": "raid_bdev1", 00:12:26.323 "uuid": "b4ffc5cb-a070-4613-a810-c351776cc356", 00:12:26.323 "strip_size_kb": 64, 00:12:26.323 "state": "online", 00:12:26.323 "raid_level": "concat", 00:12:26.323 "superblock": true, 00:12:26.323 "num_base_bdevs": 3, 00:12:26.323 "num_base_bdevs_discovered": 3, 00:12:26.323 "num_base_bdevs_operational": 3, 00:12:26.323 "base_bdevs_list": [ 00:12:26.323 { 00:12:26.323 "name": "pt1", 00:12:26.323 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:26.323 "is_configured": true, 00:12:26.323 "data_offset": 2048, 00:12:26.323 "data_size": 63488 00:12:26.323 }, 00:12:26.323 { 00:12:26.323 "name": "pt2", 00:12:26.323 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:26.323 "is_configured": true, 00:12:26.323 "data_offset": 2048, 00:12:26.323 "data_size": 63488 00:12:26.323 }, 00:12:26.323 { 00:12:26.323 "name": "pt3", 00:12:26.323 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:26.323 "is_configured": true, 00:12:26.323 "data_offset": 2048, 00:12:26.323 "data_size": 63488 00:12:26.323 } 00:12:26.323 ] 00:12:26.323 }' 00:12:26.323 06:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.323 06:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.582 06:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:26.582 06:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=raid_bdev1 00:12:26.582 06:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:26.582 06:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:26.582 06:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:26.582 06:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:26.582 06:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:26.582 06:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.582 06:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.582 06:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:26.582 [2024-11-26 06:22:10.661543] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:26.582 06:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.582 06:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:26.582 "name": "raid_bdev1", 00:12:26.582 "aliases": [ 00:12:26.582 "b4ffc5cb-a070-4613-a810-c351776cc356" 00:12:26.582 ], 00:12:26.582 "product_name": "Raid Volume", 00:12:26.582 "block_size": 512, 00:12:26.582 "num_blocks": 190464, 00:12:26.582 "uuid": "b4ffc5cb-a070-4613-a810-c351776cc356", 00:12:26.582 "assigned_rate_limits": { 00:12:26.582 "rw_ios_per_sec": 0, 00:12:26.582 "rw_mbytes_per_sec": 0, 00:12:26.582 "r_mbytes_per_sec": 0, 00:12:26.582 "w_mbytes_per_sec": 0 00:12:26.582 }, 00:12:26.582 "claimed": false, 00:12:26.582 "zoned": false, 00:12:26.582 "supported_io_types": { 00:12:26.582 "read": true, 00:12:26.582 "write": true, 00:12:26.582 "unmap": true, 00:12:26.582 "flush": true, 00:12:26.582 "reset": true, 00:12:26.582 "nvme_admin": false, 00:12:26.582 "nvme_io": false, 00:12:26.582 
"nvme_io_md": false, 00:12:26.582 "write_zeroes": true, 00:12:26.582 "zcopy": false, 00:12:26.582 "get_zone_info": false, 00:12:26.582 "zone_management": false, 00:12:26.582 "zone_append": false, 00:12:26.582 "compare": false, 00:12:26.582 "compare_and_write": false, 00:12:26.582 "abort": false, 00:12:26.582 "seek_hole": false, 00:12:26.582 "seek_data": false, 00:12:26.582 "copy": false, 00:12:26.582 "nvme_iov_md": false 00:12:26.582 }, 00:12:26.582 "memory_domains": [ 00:12:26.582 { 00:12:26.582 "dma_device_id": "system", 00:12:26.582 "dma_device_type": 1 00:12:26.582 }, 00:12:26.582 { 00:12:26.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:26.582 "dma_device_type": 2 00:12:26.582 }, 00:12:26.582 { 00:12:26.582 "dma_device_id": "system", 00:12:26.582 "dma_device_type": 1 00:12:26.582 }, 00:12:26.582 { 00:12:26.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:26.582 "dma_device_type": 2 00:12:26.582 }, 00:12:26.582 { 00:12:26.582 "dma_device_id": "system", 00:12:26.582 "dma_device_type": 1 00:12:26.582 }, 00:12:26.582 { 00:12:26.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:26.582 "dma_device_type": 2 00:12:26.582 } 00:12:26.582 ], 00:12:26.582 "driver_specific": { 00:12:26.582 "raid": { 00:12:26.582 "uuid": "b4ffc5cb-a070-4613-a810-c351776cc356", 00:12:26.582 "strip_size_kb": 64, 00:12:26.582 "state": "online", 00:12:26.582 "raid_level": "concat", 00:12:26.582 "superblock": true, 00:12:26.582 "num_base_bdevs": 3, 00:12:26.582 "num_base_bdevs_discovered": 3, 00:12:26.582 "num_base_bdevs_operational": 3, 00:12:26.582 "base_bdevs_list": [ 00:12:26.582 { 00:12:26.582 "name": "pt1", 00:12:26.582 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:26.582 "is_configured": true, 00:12:26.582 "data_offset": 2048, 00:12:26.582 "data_size": 63488 00:12:26.582 }, 00:12:26.582 { 00:12:26.582 "name": "pt2", 00:12:26.582 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:26.582 "is_configured": true, 00:12:26.582 "data_offset": 2048, 00:12:26.582 "data_size": 
63488 00:12:26.582 }, 00:12:26.582 { 00:12:26.582 "name": "pt3", 00:12:26.582 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:26.582 "is_configured": true, 00:12:26.582 "data_offset": 2048, 00:12:26.582 "data_size": 63488 00:12:26.582 } 00:12:26.582 ] 00:12:26.582 } 00:12:26.582 } 00:12:26.582 }' 00:12:26.582 06:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:26.840 06:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:26.840 pt2 00:12:26.840 pt3' 00:12:26.840 06:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:26.840 06:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:26.840 06:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:26.840 06:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:26.840 06:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:26.840 06:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.840 06:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.840 06:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.840 06:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:26.840 06:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:26.840 06:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:26.840 06:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 
00:12:26.840 06:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:26.840 06:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.840 06:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.840 06:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.840 06:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:26.840 06:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:26.840 06:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:26.840 06:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:26.840 06:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.840 06:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.840 06:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:26.840 06:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.840 06:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:26.840 06:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:26.840 06:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:26.840 06:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.840 06:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.840 06:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] 
| .uuid' 00:12:26.840 [2024-11-26 06:22:10.933100] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:26.840 06:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.098 06:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' b4ffc5cb-a070-4613-a810-c351776cc356 '!=' b4ffc5cb-a070-4613-a810-c351776cc356 ']' 00:12:27.098 06:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:12:27.098 06:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:27.098 06:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:27.098 06:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 67273 00:12:27.098 06:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 67273 ']' 00:12:27.098 06:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 67273 00:12:27.098 06:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:12:27.098 06:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:27.098 06:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67273 00:12:27.098 killing process with pid 67273 00:12:27.098 06:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:27.098 06:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:27.098 06:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67273' 00:12:27.098 06:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 67273 00:12:27.098 [2024-11-26 06:22:11.011246] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:27.098 [2024-11-26 06:22:11.011370] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:27.098 06:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 67273 00:12:27.098 [2024-11-26 06:22:11.011443] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:27.098 [2024-11-26 06:22:11.011456] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:27.356 [2024-11-26 06:22:11.369233] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:28.733 06:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:28.733 00:12:28.733 real 0m5.718s 00:12:28.733 user 0m7.994s 00:12:28.733 sys 0m1.067s 00:12:28.733 06:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:28.733 06:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.733 ************************************ 00:12:28.733 END TEST raid_superblock_test 00:12:28.733 ************************************ 00:12:28.733 06:22:12 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:12:28.733 06:22:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:28.733 06:22:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:28.733 06:22:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:28.733 ************************************ 00:12:28.733 START TEST raid_read_error_test 00:12:28.733 ************************************ 00:12:28.733 06:22:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:12:28.733 06:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:12:28.733 06:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:12:28.733 06:22:12 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:28.733 06:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:28.733 06:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:28.733 06:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:28.733 06:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:28.733 06:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:28.733 06:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:28.733 06:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:28.733 06:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:28.733 06:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:28.733 06:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:28.733 06:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:28.733 06:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:28.733 06:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:28.733 06:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:28.733 06:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:28.733 06:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:28.733 06:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:28.733 06:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:28.733 06:22:12 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:12:28.733 06:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:28.733 06:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:28.733 06:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:28.733 06:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.e4RrctIUZK 00:12:28.733 06:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67532 00:12:28.733 06:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:28.733 06:22:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67532 00:12:28.733 06:22:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 67532 ']' 00:12:28.733 06:22:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:28.733 06:22:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:28.733 06:22:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:28.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:28.733 06:22:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:28.733 06:22:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.733 [2024-11-26 06:22:12.857998] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:12:28.733 [2024-11-26 06:22:12.858259] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67532 ] 00:12:28.993 [2024-11-26 06:22:13.041416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:29.253 [2024-11-26 06:22:13.192316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.512 [2024-11-26 06:22:13.450393] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:29.512 [2024-11-26 06:22:13.450582] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:29.771 06:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:29.771 06:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:29.771 06:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:29.771 06:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:29.771 06:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.771 06:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.771 BaseBdev1_malloc 00:12:29.771 06:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.771 06:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:29.771 06:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.771 06:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.771 true 00:12:29.771 06:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:29.771 06:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:29.771 06:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.771 06:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.771 [2024-11-26 06:22:13.785089] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:29.771 [2024-11-26 06:22:13.785150] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:29.771 [2024-11-26 06:22:13.785173] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:29.771 [2024-11-26 06:22:13.785186] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:29.771 [2024-11-26 06:22:13.787741] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:29.771 [2024-11-26 06:22:13.787784] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:29.771 BaseBdev1 00:12:29.771 06:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.771 06:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:29.771 06:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:29.771 06:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.771 06:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.771 BaseBdev2_malloc 00:12:29.771 06:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.771 06:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:29.771 06:22:13 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.771 06:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.771 true 00:12:29.771 06:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.771 06:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:29.771 06:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.771 06:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.772 [2024-11-26 06:22:13.862287] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:29.772 [2024-11-26 06:22:13.862356] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:29.772 [2024-11-26 06:22:13.862377] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:29.772 [2024-11-26 06:22:13.862389] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:29.772 [2024-11-26 06:22:13.865230] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:29.772 [2024-11-26 06:22:13.865273] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:29.772 BaseBdev2 00:12:29.772 06:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.772 06:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:29.772 06:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:29.772 06:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.772 06:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.030 BaseBdev3_malloc 00:12:30.030 06:22:13 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.030 06:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:30.030 06:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.030 06:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.030 true 00:12:30.030 06:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.030 06:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:30.030 06:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.030 06:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.030 [2024-11-26 06:22:13.946206] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:30.030 [2024-11-26 06:22:13.946275] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:30.030 [2024-11-26 06:22:13.946296] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:30.030 [2024-11-26 06:22:13.946309] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:30.030 [2024-11-26 06:22:13.949016] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:30.030 [2024-11-26 06:22:13.949081] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:30.030 BaseBdev3 00:12:30.030 06:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.030 06:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:12:30.030 06:22:13 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.030 06:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.030 [2024-11-26 06:22:13.958361] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:30.030 [2024-11-26 06:22:13.960812] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:30.030 [2024-11-26 06:22:13.960911] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:30.030 [2024-11-26 06:22:13.961160] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:30.030 [2024-11-26 06:22:13.961174] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:30.030 [2024-11-26 06:22:13.961512] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:12:30.030 [2024-11-26 06:22:13.961706] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:30.030 [2024-11-26 06:22:13.961721] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:30.030 [2024-11-26 06:22:13.961921] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:30.030 06:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.030 06:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:12:30.030 06:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:30.031 06:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:30.031 06:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:30.031 06:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:30.031 06:22:13 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:30.031 06:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.031 06:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.031 06:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.031 06:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.031 06:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.031 06:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.031 06:22:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.031 06:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.031 06:22:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.031 06:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.031 "name": "raid_bdev1", 00:12:30.031 "uuid": "969fa5a6-a8b0-4cbb-b7f5-82718dd41854", 00:12:30.031 "strip_size_kb": 64, 00:12:30.031 "state": "online", 00:12:30.031 "raid_level": "concat", 00:12:30.031 "superblock": true, 00:12:30.031 "num_base_bdevs": 3, 00:12:30.031 "num_base_bdevs_discovered": 3, 00:12:30.031 "num_base_bdevs_operational": 3, 00:12:30.031 "base_bdevs_list": [ 00:12:30.031 { 00:12:30.031 "name": "BaseBdev1", 00:12:30.031 "uuid": "55c1f66a-fe4e-5f5d-b635-1f37634dbfe8", 00:12:30.031 "is_configured": true, 00:12:30.031 "data_offset": 2048, 00:12:30.031 "data_size": 63488 00:12:30.031 }, 00:12:30.031 { 00:12:30.031 "name": "BaseBdev2", 00:12:30.031 "uuid": "54a788fb-4b77-5b93-9ecd-cc08280e030d", 00:12:30.031 "is_configured": true, 00:12:30.031 "data_offset": 2048, 00:12:30.031 "data_size": 63488 
00:12:30.031 }, 00:12:30.031 { 00:12:30.031 "name": "BaseBdev3", 00:12:30.031 "uuid": "2caa933f-6190-5ad5-8cf7-7b0340d8aae2", 00:12:30.031 "is_configured": true, 00:12:30.031 "data_offset": 2048, 00:12:30.031 "data_size": 63488 00:12:30.031 } 00:12:30.031 ] 00:12:30.031 }' 00:12:30.031 06:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.031 06:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.600 06:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:30.600 06:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:30.600 [2024-11-26 06:22:14.535142] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:12:31.539 06:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:31.539 06:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.539 06:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.539 06:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.539 06:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:31.539 06:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:12:31.539 06:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:12:31.539 06:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:12:31.539 06:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:31.539 06:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:12:31.539 06:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:31.539 06:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:31.539 06:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:31.539 06:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:31.539 06:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.539 06:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:31.539 06:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:31.539 06:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.539 06:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.539 06:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.539 06:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.539 06:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.539 06:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:31.539 "name": "raid_bdev1", 00:12:31.539 "uuid": "969fa5a6-a8b0-4cbb-b7f5-82718dd41854", 00:12:31.539 "strip_size_kb": 64, 00:12:31.539 "state": "online", 00:12:31.539 "raid_level": "concat", 00:12:31.539 "superblock": true, 00:12:31.539 "num_base_bdevs": 3, 00:12:31.539 "num_base_bdevs_discovered": 3, 00:12:31.539 "num_base_bdevs_operational": 3, 00:12:31.539 "base_bdevs_list": [ 00:12:31.539 { 00:12:31.539 "name": "BaseBdev1", 00:12:31.539 "uuid": "55c1f66a-fe4e-5f5d-b635-1f37634dbfe8", 00:12:31.539 "is_configured": true, 00:12:31.539 "data_offset": 2048, 00:12:31.539 "data_size": 63488 
00:12:31.539 }, 00:12:31.539 { 00:12:31.539 "name": "BaseBdev2", 00:12:31.539 "uuid": "54a788fb-4b77-5b93-9ecd-cc08280e030d", 00:12:31.539 "is_configured": true, 00:12:31.539 "data_offset": 2048, 00:12:31.539 "data_size": 63488 00:12:31.539 }, 00:12:31.539 { 00:12:31.539 "name": "BaseBdev3", 00:12:31.539 "uuid": "2caa933f-6190-5ad5-8cf7-7b0340d8aae2", 00:12:31.539 "is_configured": true, 00:12:31.539 "data_offset": 2048, 00:12:31.539 "data_size": 63488 00:12:31.539 } 00:12:31.539 ] 00:12:31.539 }' 00:12:31.539 06:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:31.539 06:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.136 06:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:32.136 06:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.136 06:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.136 [2024-11-26 06:22:15.953728] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:32.136 [2024-11-26 06:22:15.953835] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:32.136 [2024-11-26 06:22:15.956712] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:32.136 [2024-11-26 06:22:15.956808] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:32.136 [2024-11-26 06:22:15.956933] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:32.136 [2024-11-26 06:22:15.956984] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:32.136 { 00:12:32.136 "results": [ 00:12:32.136 { 00:12:32.136 "job": "raid_bdev1", 00:12:32.136 "core_mask": "0x1", 00:12:32.136 "workload": "randrw", 00:12:32.136 "percentage": 50, 
00:12:32.136 "status": "finished", 00:12:32.136 "queue_depth": 1, 00:12:32.136 "io_size": 131072, 00:12:32.136 "runtime": 1.418878, 00:12:32.136 "iops": 12275.192088396607, 00:12:32.136 "mibps": 1534.3990110495758, 00:12:32.136 "io_failed": 1, 00:12:32.136 "io_timeout": 0, 00:12:32.136 "avg_latency_us": 114.62087275072066, 00:12:32.136 "min_latency_us": 28.05938864628821, 00:12:32.136 "max_latency_us": 1466.6899563318777 00:12:32.136 } 00:12:32.136 ], 00:12:32.136 "core_count": 1 00:12:32.136 } 00:12:32.136 06:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.136 06:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67532 00:12:32.136 06:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 67532 ']' 00:12:32.136 06:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 67532 00:12:32.136 06:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:12:32.136 06:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:32.136 06:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67532 00:12:32.136 killing process with pid 67532 00:12:32.136 06:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:32.136 06:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:32.136 06:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67532' 00:12:32.136 06:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 67532 00:12:32.136 [2024-11-26 06:22:15.999797] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:32.136 06:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 67532 00:12:32.396 [2024-11-26 
06:22:16.269623] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:33.778 06:22:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.e4RrctIUZK 00:12:33.778 06:22:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:33.778 06:22:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:33.778 ************************************ 00:12:33.778 END TEST raid_read_error_test 00:12:33.778 ************************************ 00:12:33.778 06:22:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:12:33.778 06:22:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:12:33.778 06:22:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:33.778 06:22:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:33.778 06:22:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:12:33.778 00:12:33.778 real 0m4.897s 00:12:33.778 user 0m5.728s 00:12:33.778 sys 0m0.690s 00:12:33.778 06:22:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:33.778 06:22:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.778 06:22:17 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:12:33.778 06:22:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:33.778 06:22:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:33.778 06:22:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:33.778 ************************************ 00:12:33.778 START TEST raid_write_error_test 00:12:33.778 ************************************ 00:12:33.778 06:22:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:12:33.778 06:22:17 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:12:33.778 06:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:12:33.778 06:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:33.778 06:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:33.779 06:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:33.779 06:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:33.779 06:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:33.779 06:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:33.779 06:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:33.779 06:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:33.779 06:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:33.779 06:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:33.779 06:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:33.779 06:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:33.779 06:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:33.779 06:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:33.779 06:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:33.779 06:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:33.779 06:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:33.779 06:22:17 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:33.779 06:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:33.779 06:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:12:33.779 06:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:33.779 06:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:33.779 06:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:33.779 06:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.xSqpyqG7uv 00:12:33.779 06:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67683 00:12:33.779 06:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67683 00:12:33.779 06:22:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 67683 ']' 00:12:33.779 06:22:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:33.779 06:22:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:33.779 06:22:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:33.779 06:22:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:33.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:33.779 06:22:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:33.779 06:22:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.779 [2024-11-26 06:22:17.813201] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:12:33.779 [2024-11-26 06:22:17.813484] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67683 ] 00:12:34.039 [2024-11-26 06:22:17.997656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:34.039 [2024-11-26 06:22:18.151032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:34.299 [2024-11-26 06:22:18.409435] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:34.299 [2024-11-26 06:22:18.409522] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:34.869 06:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:34.869 06:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:34.869 06:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:34.869 06:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:34.869 06:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.869 06:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.869 BaseBdev1_malloc 00:12:34.869 06:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.869 06:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:12:34.869 06:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.869 06:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.869 true 00:12:34.869 06:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.869 06:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:34.869 06:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.869 06:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.869 [2024-11-26 06:22:18.790846] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:34.869 [2024-11-26 06:22:18.790917] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:34.869 [2024-11-26 06:22:18.790940] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:34.869 [2024-11-26 06:22:18.790952] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:34.869 [2024-11-26 06:22:18.793713] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:34.869 [2024-11-26 06:22:18.793756] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:34.869 BaseBdev1 00:12:34.869 06:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.869 06:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:34.869 06:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:34.869 06:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.869 06:22:18 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:34.869 BaseBdev2_malloc 00:12:34.869 06:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.869 06:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:34.869 06:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.869 06:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.869 true 00:12:34.869 06:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.869 06:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:34.869 06:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.869 06:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.869 [2024-11-26 06:22:18.857943] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:34.869 [2024-11-26 06:22:18.858010] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:34.869 [2024-11-26 06:22:18.858030] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:34.869 [2024-11-26 06:22:18.858042] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:34.869 [2024-11-26 06:22:18.860833] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:34.869 [2024-11-26 06:22:18.860889] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:34.869 BaseBdev2 00:12:34.869 06:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.869 06:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:34.869 06:22:18 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:34.869 06:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.869 06:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.869 BaseBdev3_malloc 00:12:34.869 06:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.869 06:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:34.869 06:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.869 06:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.869 true 00:12:34.869 06:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.869 06:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:34.869 06:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.869 06:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.869 [2024-11-26 06:22:18.939327] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:34.869 [2024-11-26 06:22:18.939389] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:34.870 [2024-11-26 06:22:18.939411] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:34.870 [2024-11-26 06:22:18.939422] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:34.870 [2024-11-26 06:22:18.942236] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:34.870 [2024-11-26 06:22:18.942279] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:12:34.870 BaseBdev3 00:12:34.870 06:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.870 06:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:12:34.870 06:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.870 06:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.870 [2024-11-26 06:22:18.947408] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:34.870 [2024-11-26 06:22:18.949920] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:34.870 [2024-11-26 06:22:18.950017] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:34.870 [2024-11-26 06:22:18.950273] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:34.870 [2024-11-26 06:22:18.950289] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:34.870 [2024-11-26 06:22:18.950593] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:12:34.870 [2024-11-26 06:22:18.950782] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:34.870 [2024-11-26 06:22:18.950800] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:34.870 [2024-11-26 06:22:18.950992] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:34.870 06:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.870 06:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:12:34.870 06:22:18 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:34.870 06:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:34.870 06:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:34.870 06:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:34.870 06:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:34.870 06:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.870 06:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.870 06:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.870 06:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.870 06:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.870 06:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.870 06:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.870 06:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.870 06:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.129 06:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:35.129 "name": "raid_bdev1", 00:12:35.129 "uuid": "d6e37360-389d-494a-995a-cb1c60b745e0", 00:12:35.129 "strip_size_kb": 64, 00:12:35.129 "state": "online", 00:12:35.129 "raid_level": "concat", 00:12:35.129 "superblock": true, 00:12:35.129 "num_base_bdevs": 3, 00:12:35.129 "num_base_bdevs_discovered": 3, 00:12:35.129 "num_base_bdevs_operational": 3, 00:12:35.129 "base_bdevs_list": [ 00:12:35.129 { 00:12:35.129 
"name": "BaseBdev1", 00:12:35.129 "uuid": "f9f9d89f-7beb-596a-bf74-1751b670f63b", 00:12:35.129 "is_configured": true, 00:12:35.129 "data_offset": 2048, 00:12:35.129 "data_size": 63488 00:12:35.129 }, 00:12:35.129 { 00:12:35.129 "name": "BaseBdev2", 00:12:35.129 "uuid": "3430e764-97ff-52dd-9863-5be80c9e9fe2", 00:12:35.129 "is_configured": true, 00:12:35.129 "data_offset": 2048, 00:12:35.129 "data_size": 63488 00:12:35.129 }, 00:12:35.129 { 00:12:35.129 "name": "BaseBdev3", 00:12:35.129 "uuid": "68793087-e73e-5a39-b2d2-a430e5781f61", 00:12:35.129 "is_configured": true, 00:12:35.129 "data_offset": 2048, 00:12:35.129 "data_size": 63488 00:12:35.129 } 00:12:35.129 ] 00:12:35.129 }' 00:12:35.129 06:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:35.129 06:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.389 06:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:35.389 06:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:35.649 [2024-11-26 06:22:19.588017] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:12:36.588 06:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:36.588 06:22:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.588 06:22:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.588 06:22:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.588 06:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:36.588 06:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:12:36.588 06:22:20 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:12:36.588 06:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:12:36.588 06:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:36.588 06:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:36.588 06:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:36.588 06:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:36.588 06:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:36.588 06:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.588 06:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.588 06:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:36.588 06:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.588 06:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.588 06:22:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.588 06:22:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.588 06:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.588 06:22:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.588 06:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.588 "name": "raid_bdev1", 00:12:36.588 "uuid": "d6e37360-389d-494a-995a-cb1c60b745e0", 00:12:36.588 "strip_size_kb": 64, 00:12:36.588 "state": "online", 
00:12:36.588 "raid_level": "concat", 00:12:36.588 "superblock": true, 00:12:36.588 "num_base_bdevs": 3, 00:12:36.588 "num_base_bdevs_discovered": 3, 00:12:36.588 "num_base_bdevs_operational": 3, 00:12:36.588 "base_bdevs_list": [ 00:12:36.588 { 00:12:36.588 "name": "BaseBdev1", 00:12:36.588 "uuid": "f9f9d89f-7beb-596a-bf74-1751b670f63b", 00:12:36.588 "is_configured": true, 00:12:36.588 "data_offset": 2048, 00:12:36.588 "data_size": 63488 00:12:36.588 }, 00:12:36.588 { 00:12:36.588 "name": "BaseBdev2", 00:12:36.588 "uuid": "3430e764-97ff-52dd-9863-5be80c9e9fe2", 00:12:36.588 "is_configured": true, 00:12:36.588 "data_offset": 2048, 00:12:36.588 "data_size": 63488 00:12:36.588 }, 00:12:36.588 { 00:12:36.588 "name": "BaseBdev3", 00:12:36.588 "uuid": "68793087-e73e-5a39-b2d2-a430e5781f61", 00:12:36.588 "is_configured": true, 00:12:36.589 "data_offset": 2048, 00:12:36.589 "data_size": 63488 00:12:36.589 } 00:12:36.589 ] 00:12:36.589 }' 00:12:36.589 06:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.589 06:22:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.849 06:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:36.849 06:22:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.849 06:22:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.849 [2024-11-26 06:22:20.922243] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:36.849 [2024-11-26 06:22:20.922280] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:36.849 [2024-11-26 06:22:20.925341] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:36.849 [2024-11-26 06:22:20.925393] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:36.849 [2024-11-26 06:22:20.925436] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:36.849 [2024-11-26 06:22:20.925450] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:36.849 { 00:12:36.849 "results": [ 00:12:36.849 { 00:12:36.849 "job": "raid_bdev1", 00:12:36.849 "core_mask": "0x1", 00:12:36.849 "workload": "randrw", 00:12:36.849 "percentage": 50, 00:12:36.849 "status": "finished", 00:12:36.849 "queue_depth": 1, 00:12:36.849 "io_size": 131072, 00:12:36.849 "runtime": 1.33423, 00:12:36.849 "iops": 12213.036732797194, 00:12:36.849 "mibps": 1526.6295915996493, 00:12:36.849 "io_failed": 1, 00:12:36.849 "io_timeout": 0, 00:12:36.849 "avg_latency_us": 115.11512820677723, 00:12:36.849 "min_latency_us": 27.165065502183406, 00:12:36.849 "max_latency_us": 1538.235807860262 00:12:36.849 } 00:12:36.849 ], 00:12:36.849 "core_count": 1 00:12:36.849 } 00:12:36.849 06:22:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.849 06:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67683 00:12:36.849 06:22:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 67683 ']' 00:12:36.849 06:22:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 67683 00:12:36.849 06:22:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:12:36.849 06:22:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:36.849 06:22:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67683 00:12:36.849 killing process with pid 67683 00:12:36.849 06:22:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:36.849 06:22:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:36.849 06:22:20 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67683' 00:12:36.849 06:22:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 67683 00:12:36.849 06:22:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 67683 00:12:36.849 [2024-11-26 06:22:20.958750] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:37.127 [2024-11-26 06:22:21.245709] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:38.505 06:22:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:38.505 06:22:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:38.505 06:22:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.xSqpyqG7uv 00:12:38.765 ************************************ 00:12:38.765 END TEST raid_write_error_test 00:12:38.765 ************************************ 00:12:38.765 06:22:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:12:38.765 06:22:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:12:38.765 06:22:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:38.765 06:22:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:38.766 06:22:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:12:38.766 00:12:38.766 real 0m4.945s 00:12:38.766 user 0m5.824s 00:12:38.766 sys 0m0.664s 00:12:38.766 06:22:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:38.766 06:22:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.766 06:22:22 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:38.766 06:22:22 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:12:38.766 06:22:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:38.766 06:22:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:38.766 06:22:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:38.766 ************************************ 00:12:38.766 START TEST raid_state_function_test 00:12:38.766 ************************************ 00:12:38.766 06:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:12:38.766 06:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:12:38.766 06:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:12:38.766 06:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:38.766 06:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:38.766 06:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:38.766 06:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:38.766 06:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:38.766 06:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:38.766 06:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:38.766 06:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:38.766 06:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:38.766 06:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:38.766 06:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:38.766 06:22:22 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:38.766 06:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:38.766 06:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:38.766 06:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:38.766 06:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:38.766 06:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:38.766 06:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:38.766 06:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:38.766 06:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:12:38.766 06:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:12:38.766 06:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:38.766 06:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:38.766 Process raid pid: 67827 00:12:38.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:38.766 06:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67827 00:12:38.766 06:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:38.766 06:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67827' 00:12:38.766 06:22:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67827 00:12:38.766 06:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 67827 ']' 00:12:38.766 06:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:38.766 06:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:38.766 06:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:38.766 06:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:38.766 06:22:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.766 [2024-11-26 06:22:22.833415] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:12:38.766 [2024-11-26 06:22:22.833688] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:39.025 [2024-11-26 06:22:23.007830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:39.285 [2024-11-26 06:22:23.160234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.544 [2024-11-26 06:22:23.427181] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:39.544 [2024-11-26 06:22:23.427359] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:39.803 06:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:39.803 06:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:12:39.803 06:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:39.803 06:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.803 06:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.803 [2024-11-26 06:22:23.712560] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:39.803 [2024-11-26 06:22:23.712684] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:39.803 [2024-11-26 06:22:23.712722] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:39.803 [2024-11-26 06:22:23.712784] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:39.803 [2024-11-26 06:22:23.712822] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 
00:12:39.803 [2024-11-26 06:22:23.712864] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:39.803 06:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.803 06:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:39.803 06:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:39.803 06:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:39.803 06:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:39.803 06:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:39.803 06:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:39.803 06:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:39.803 06:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:39.803 06:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:39.803 06:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:39.803 06:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.803 06:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:39.803 06:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.803 06:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.804 06:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.804 06:22:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:39.804 "name": "Existed_Raid", 00:12:39.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.804 "strip_size_kb": 0, 00:12:39.804 "state": "configuring", 00:12:39.804 "raid_level": "raid1", 00:12:39.804 "superblock": false, 00:12:39.804 "num_base_bdevs": 3, 00:12:39.804 "num_base_bdevs_discovered": 0, 00:12:39.804 "num_base_bdevs_operational": 3, 00:12:39.804 "base_bdevs_list": [ 00:12:39.804 { 00:12:39.804 "name": "BaseBdev1", 00:12:39.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.804 "is_configured": false, 00:12:39.804 "data_offset": 0, 00:12:39.804 "data_size": 0 00:12:39.804 }, 00:12:39.804 { 00:12:39.804 "name": "BaseBdev2", 00:12:39.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.804 "is_configured": false, 00:12:39.804 "data_offset": 0, 00:12:39.804 "data_size": 0 00:12:39.804 }, 00:12:39.804 { 00:12:39.804 "name": "BaseBdev3", 00:12:39.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.804 "is_configured": false, 00:12:39.804 "data_offset": 0, 00:12:39.804 "data_size": 0 00:12:39.804 } 00:12:39.804 ] 00:12:39.804 }' 00:12:39.804 06:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:39.804 06:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.063 06:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:40.063 06:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.063 06:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.063 [2024-11-26 06:22:24.115874] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:40.063 [2024-11-26 06:22:24.115920] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 
00:12:40.063 06:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.063 06:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:40.063 06:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.063 06:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.063 [2024-11-26 06:22:24.123848] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:40.063 [2024-11-26 06:22:24.123907] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:40.063 [2024-11-26 06:22:24.123919] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:40.063 [2024-11-26 06:22:24.123931] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:40.063 [2024-11-26 06:22:24.123939] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:40.063 [2024-11-26 06:22:24.123950] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:40.063 06:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.063 06:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:40.063 06:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.063 06:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.063 [2024-11-26 06:22:24.183597] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:40.063 BaseBdev1 00:12:40.063 06:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:40.063 06:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:40.063 06:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:40.063 06:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:40.063 06:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:40.063 06:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:40.063 06:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:40.063 06:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:40.063 06:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.063 06:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.376 06:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.376 06:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:40.376 06:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.376 06:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.376 [ 00:12:40.376 { 00:12:40.376 "name": "BaseBdev1", 00:12:40.376 "aliases": [ 00:12:40.376 "135e6e24-e039-457e-b82d-f907b3ccd01b" 00:12:40.376 ], 00:12:40.376 "product_name": "Malloc disk", 00:12:40.376 "block_size": 512, 00:12:40.376 "num_blocks": 65536, 00:12:40.376 "uuid": "135e6e24-e039-457e-b82d-f907b3ccd01b", 00:12:40.376 "assigned_rate_limits": { 00:12:40.376 "rw_ios_per_sec": 0, 00:12:40.376 "rw_mbytes_per_sec": 0, 00:12:40.376 "r_mbytes_per_sec": 0, 00:12:40.376 "w_mbytes_per_sec": 0 00:12:40.376 }, 
00:12:40.376 "claimed": true, 00:12:40.376 "claim_type": "exclusive_write", 00:12:40.376 "zoned": false, 00:12:40.376 "supported_io_types": { 00:12:40.376 "read": true, 00:12:40.376 "write": true, 00:12:40.376 "unmap": true, 00:12:40.376 "flush": true, 00:12:40.376 "reset": true, 00:12:40.376 "nvme_admin": false, 00:12:40.376 "nvme_io": false, 00:12:40.376 "nvme_io_md": false, 00:12:40.376 "write_zeroes": true, 00:12:40.376 "zcopy": true, 00:12:40.376 "get_zone_info": false, 00:12:40.376 "zone_management": false, 00:12:40.376 "zone_append": false, 00:12:40.376 "compare": false, 00:12:40.376 "compare_and_write": false, 00:12:40.376 "abort": true, 00:12:40.376 "seek_hole": false, 00:12:40.376 "seek_data": false, 00:12:40.376 "copy": true, 00:12:40.376 "nvme_iov_md": false 00:12:40.376 }, 00:12:40.376 "memory_domains": [ 00:12:40.376 { 00:12:40.376 "dma_device_id": "system", 00:12:40.376 "dma_device_type": 1 00:12:40.376 }, 00:12:40.376 { 00:12:40.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:40.376 "dma_device_type": 2 00:12:40.376 } 00:12:40.376 ], 00:12:40.376 "driver_specific": {} 00:12:40.376 } 00:12:40.376 ] 00:12:40.376 06:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.376 06:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:40.376 06:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:40.376 06:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:40.376 06:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:40.376 06:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:40.376 06:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:40.376 06:22:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:40.376 06:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.376 06:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.376 06:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.376 06:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.376 06:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.376 06:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.376 06:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.376 06:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:40.376 06:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.376 06:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.376 "name": "Existed_Raid", 00:12:40.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.376 "strip_size_kb": 0, 00:12:40.376 "state": "configuring", 00:12:40.376 "raid_level": "raid1", 00:12:40.376 "superblock": false, 00:12:40.376 "num_base_bdevs": 3, 00:12:40.376 "num_base_bdevs_discovered": 1, 00:12:40.376 "num_base_bdevs_operational": 3, 00:12:40.377 "base_bdevs_list": [ 00:12:40.377 { 00:12:40.377 "name": "BaseBdev1", 00:12:40.377 "uuid": "135e6e24-e039-457e-b82d-f907b3ccd01b", 00:12:40.377 "is_configured": true, 00:12:40.377 "data_offset": 0, 00:12:40.377 "data_size": 65536 00:12:40.377 }, 00:12:40.377 { 00:12:40.377 "name": "BaseBdev2", 00:12:40.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.377 "is_configured": false, 00:12:40.377 
"data_offset": 0, 00:12:40.377 "data_size": 0 00:12:40.377 }, 00:12:40.377 { 00:12:40.377 "name": "BaseBdev3", 00:12:40.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.377 "is_configured": false, 00:12:40.377 "data_offset": 0, 00:12:40.377 "data_size": 0 00:12:40.377 } 00:12:40.377 ] 00:12:40.377 }' 00:12:40.377 06:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.377 06:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.648 06:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:40.648 06:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.648 06:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.648 [2024-11-26 06:22:24.670832] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:40.648 [2024-11-26 06:22:24.670968] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:40.648 06:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.648 06:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:40.648 06:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.648 06:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.648 [2024-11-26 06:22:24.678891] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:40.648 [2024-11-26 06:22:24.681369] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:40.648 [2024-11-26 06:22:24.681460] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 
doesn't exist now 00:12:40.648 [2024-11-26 06:22:24.681513] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:40.648 [2024-11-26 06:22:24.681555] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:40.648 06:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.648 06:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:40.648 06:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:40.648 06:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:40.648 06:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:40.648 06:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:40.648 06:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:40.648 06:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:40.648 06:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:40.648 06:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.648 06:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.648 06:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.648 06:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.648 06:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.648 06:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:12:40.648 06:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.648 06:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.648 06:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.648 06:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.648 "name": "Existed_Raid", 00:12:40.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.648 "strip_size_kb": 0, 00:12:40.648 "state": "configuring", 00:12:40.648 "raid_level": "raid1", 00:12:40.648 "superblock": false, 00:12:40.648 "num_base_bdevs": 3, 00:12:40.648 "num_base_bdevs_discovered": 1, 00:12:40.648 "num_base_bdevs_operational": 3, 00:12:40.648 "base_bdevs_list": [ 00:12:40.648 { 00:12:40.648 "name": "BaseBdev1", 00:12:40.648 "uuid": "135e6e24-e039-457e-b82d-f907b3ccd01b", 00:12:40.648 "is_configured": true, 00:12:40.648 "data_offset": 0, 00:12:40.648 "data_size": 65536 00:12:40.648 }, 00:12:40.648 { 00:12:40.648 "name": "BaseBdev2", 00:12:40.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.648 "is_configured": false, 00:12:40.648 "data_offset": 0, 00:12:40.648 "data_size": 0 00:12:40.648 }, 00:12:40.648 { 00:12:40.648 "name": "BaseBdev3", 00:12:40.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.648 "is_configured": false, 00:12:40.648 "data_offset": 0, 00:12:40.648 "data_size": 0 00:12:40.648 } 00:12:40.648 ] 00:12:40.648 }' 00:12:40.648 06:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.648 06:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.216 06:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:41.216 06:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.216 
06:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.216 [2024-11-26 06:22:25.189794] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:41.216 BaseBdev2 00:12:41.216 06:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.216 06:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:41.216 06:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:41.216 06:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:41.216 06:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:41.216 06:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:41.216 06:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:41.216 06:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:41.216 06:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.216 06:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.216 06:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.216 06:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:41.216 06:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.216 06:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.216 [ 00:12:41.216 { 00:12:41.216 "name": "BaseBdev2", 00:12:41.216 "aliases": [ 00:12:41.216 "69f75e63-fc4a-4330-9501-97a0aee7c5df" 00:12:41.216 ], 00:12:41.216 "product_name": 
"Malloc disk", 00:12:41.216 "block_size": 512, 00:12:41.216 "num_blocks": 65536, 00:12:41.216 "uuid": "69f75e63-fc4a-4330-9501-97a0aee7c5df", 00:12:41.216 "assigned_rate_limits": { 00:12:41.216 "rw_ios_per_sec": 0, 00:12:41.216 "rw_mbytes_per_sec": 0, 00:12:41.216 "r_mbytes_per_sec": 0, 00:12:41.216 "w_mbytes_per_sec": 0 00:12:41.216 }, 00:12:41.216 "claimed": true, 00:12:41.216 "claim_type": "exclusive_write", 00:12:41.217 "zoned": false, 00:12:41.217 "supported_io_types": { 00:12:41.217 "read": true, 00:12:41.217 "write": true, 00:12:41.217 "unmap": true, 00:12:41.217 "flush": true, 00:12:41.217 "reset": true, 00:12:41.217 "nvme_admin": false, 00:12:41.217 "nvme_io": false, 00:12:41.217 "nvme_io_md": false, 00:12:41.217 "write_zeroes": true, 00:12:41.217 "zcopy": true, 00:12:41.217 "get_zone_info": false, 00:12:41.217 "zone_management": false, 00:12:41.217 "zone_append": false, 00:12:41.217 "compare": false, 00:12:41.217 "compare_and_write": false, 00:12:41.217 "abort": true, 00:12:41.217 "seek_hole": false, 00:12:41.217 "seek_data": false, 00:12:41.217 "copy": true, 00:12:41.217 "nvme_iov_md": false 00:12:41.217 }, 00:12:41.217 "memory_domains": [ 00:12:41.217 { 00:12:41.217 "dma_device_id": "system", 00:12:41.217 "dma_device_type": 1 00:12:41.217 }, 00:12:41.217 { 00:12:41.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:41.217 "dma_device_type": 2 00:12:41.217 } 00:12:41.217 ], 00:12:41.217 "driver_specific": {} 00:12:41.217 } 00:12:41.217 ] 00:12:41.217 06:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.217 06:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:41.217 06:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:41.217 06:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:41.217 06:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:41.217 06:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:41.217 06:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:41.217 06:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:41.217 06:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:41.217 06:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:41.217 06:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:41.217 06:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:41.217 06:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:41.217 06:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:41.217 06:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.217 06:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:41.217 06:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.217 06:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.217 06:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.217 06:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:41.217 "name": "Existed_Raid", 00:12:41.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.217 "strip_size_kb": 0, 00:12:41.217 "state": "configuring", 00:12:41.217 "raid_level": "raid1", 00:12:41.217 "superblock": false, 00:12:41.217 
"num_base_bdevs": 3, 00:12:41.217 "num_base_bdevs_discovered": 2, 00:12:41.217 "num_base_bdevs_operational": 3, 00:12:41.217 "base_bdevs_list": [ 00:12:41.217 { 00:12:41.217 "name": "BaseBdev1", 00:12:41.217 "uuid": "135e6e24-e039-457e-b82d-f907b3ccd01b", 00:12:41.217 "is_configured": true, 00:12:41.217 "data_offset": 0, 00:12:41.217 "data_size": 65536 00:12:41.217 }, 00:12:41.217 { 00:12:41.217 "name": "BaseBdev2", 00:12:41.217 "uuid": "69f75e63-fc4a-4330-9501-97a0aee7c5df", 00:12:41.217 "is_configured": true, 00:12:41.217 "data_offset": 0, 00:12:41.217 "data_size": 65536 00:12:41.217 }, 00:12:41.217 { 00:12:41.217 "name": "BaseBdev3", 00:12:41.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.217 "is_configured": false, 00:12:41.217 "data_offset": 0, 00:12:41.217 "data_size": 0 00:12:41.217 } 00:12:41.217 ] 00:12:41.217 }' 00:12:41.217 06:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:41.217 06:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.783 06:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:41.783 06:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.783 06:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.783 [2024-11-26 06:22:25.770797] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:41.783 [2024-11-26 06:22:25.770991] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:41.783 [2024-11-26 06:22:25.771026] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:41.783 [2024-11-26 06:22:25.771485] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:41.783 [2024-11-26 06:22:25.771778] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007e80 00:12:41.783 [2024-11-26 06:22:25.771829] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:41.783 [2024-11-26 06:22:25.772219] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:41.783 BaseBdev3 00:12:41.783 06:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.783 06:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:41.783 06:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:41.783 06:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:41.783 06:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:41.783 06:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:41.783 06:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:41.783 06:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:41.783 06:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.783 06:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.783 06:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.783 06:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:41.783 06:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.783 06:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.783 [ 00:12:41.783 { 00:12:41.783 "name": "BaseBdev3", 00:12:41.783 "aliases": [ 00:12:41.783 
"cd4fe8d6-2540-476f-8e61-fa7945db416c" 00:12:41.783 ], 00:12:41.783 "product_name": "Malloc disk", 00:12:41.783 "block_size": 512, 00:12:41.783 "num_blocks": 65536, 00:12:41.783 "uuid": "cd4fe8d6-2540-476f-8e61-fa7945db416c", 00:12:41.783 "assigned_rate_limits": { 00:12:41.783 "rw_ios_per_sec": 0, 00:12:41.783 "rw_mbytes_per_sec": 0, 00:12:41.783 "r_mbytes_per_sec": 0, 00:12:41.783 "w_mbytes_per_sec": 0 00:12:41.783 }, 00:12:41.783 "claimed": true, 00:12:41.783 "claim_type": "exclusive_write", 00:12:41.783 "zoned": false, 00:12:41.783 "supported_io_types": { 00:12:41.783 "read": true, 00:12:41.783 "write": true, 00:12:41.783 "unmap": true, 00:12:41.783 "flush": true, 00:12:41.783 "reset": true, 00:12:41.783 "nvme_admin": false, 00:12:41.783 "nvme_io": false, 00:12:41.783 "nvme_io_md": false, 00:12:41.783 "write_zeroes": true, 00:12:41.783 "zcopy": true, 00:12:41.783 "get_zone_info": false, 00:12:41.783 "zone_management": false, 00:12:41.783 "zone_append": false, 00:12:41.783 "compare": false, 00:12:41.783 "compare_and_write": false, 00:12:41.783 "abort": true, 00:12:41.783 "seek_hole": false, 00:12:41.783 "seek_data": false, 00:12:41.783 "copy": true, 00:12:41.783 "nvme_iov_md": false 00:12:41.783 }, 00:12:41.783 "memory_domains": [ 00:12:41.783 { 00:12:41.783 "dma_device_id": "system", 00:12:41.783 "dma_device_type": 1 00:12:41.783 }, 00:12:41.783 { 00:12:41.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:41.783 "dma_device_type": 2 00:12:41.783 } 00:12:41.783 ], 00:12:41.783 "driver_specific": {} 00:12:41.783 } 00:12:41.783 ] 00:12:41.783 06:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.783 06:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:41.783 06:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:41.783 06:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:41.783 
06:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:41.783 06:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:41.783 06:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:41.783 06:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:41.783 06:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:41.783 06:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:41.783 06:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:41.784 06:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:41.784 06:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:41.784 06:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:41.784 06:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.784 06:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.784 06:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.784 06:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:41.784 06:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.784 06:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:41.784 "name": "Existed_Raid", 00:12:41.784 "uuid": "47c289cc-d9ec-4a63-8305-b4201bfa21a2", 00:12:41.784 "strip_size_kb": 0, 00:12:41.784 "state": "online", 00:12:41.784 "raid_level": 
"raid1", 00:12:41.784 "superblock": false, 00:12:41.784 "num_base_bdevs": 3, 00:12:41.784 "num_base_bdevs_discovered": 3, 00:12:41.784 "num_base_bdevs_operational": 3, 00:12:41.784 "base_bdevs_list": [ 00:12:41.784 { 00:12:41.784 "name": "BaseBdev1", 00:12:41.784 "uuid": "135e6e24-e039-457e-b82d-f907b3ccd01b", 00:12:41.784 "is_configured": true, 00:12:41.784 "data_offset": 0, 00:12:41.784 "data_size": 65536 00:12:41.784 }, 00:12:41.784 { 00:12:41.784 "name": "BaseBdev2", 00:12:41.784 "uuid": "69f75e63-fc4a-4330-9501-97a0aee7c5df", 00:12:41.784 "is_configured": true, 00:12:41.784 "data_offset": 0, 00:12:41.784 "data_size": 65536 00:12:41.784 }, 00:12:41.784 { 00:12:41.784 "name": "BaseBdev3", 00:12:41.784 "uuid": "cd4fe8d6-2540-476f-8e61-fa7945db416c", 00:12:41.784 "is_configured": true, 00:12:41.784 "data_offset": 0, 00:12:41.784 "data_size": 65536 00:12:41.784 } 00:12:41.784 ] 00:12:41.784 }' 00:12:41.784 06:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:41.784 06:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.350 06:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:42.350 06:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:42.350 06:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:42.350 06:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:42.350 06:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:42.350 06:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:42.350 06:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:42.350 06:22:26 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.350 06:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.350 06:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:42.350 [2024-11-26 06:22:26.238548] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:42.350 06:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.350 06:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:42.350 "name": "Existed_Raid", 00:12:42.350 "aliases": [ 00:12:42.350 "47c289cc-d9ec-4a63-8305-b4201bfa21a2" 00:12:42.350 ], 00:12:42.350 "product_name": "Raid Volume", 00:12:42.350 "block_size": 512, 00:12:42.350 "num_blocks": 65536, 00:12:42.350 "uuid": "47c289cc-d9ec-4a63-8305-b4201bfa21a2", 00:12:42.350 "assigned_rate_limits": { 00:12:42.350 "rw_ios_per_sec": 0, 00:12:42.350 "rw_mbytes_per_sec": 0, 00:12:42.350 "r_mbytes_per_sec": 0, 00:12:42.350 "w_mbytes_per_sec": 0 00:12:42.350 }, 00:12:42.350 "claimed": false, 00:12:42.350 "zoned": false, 00:12:42.350 "supported_io_types": { 00:12:42.350 "read": true, 00:12:42.350 "write": true, 00:12:42.350 "unmap": false, 00:12:42.350 "flush": false, 00:12:42.350 "reset": true, 00:12:42.350 "nvme_admin": false, 00:12:42.350 "nvme_io": false, 00:12:42.350 "nvme_io_md": false, 00:12:42.350 "write_zeroes": true, 00:12:42.350 "zcopy": false, 00:12:42.350 "get_zone_info": false, 00:12:42.350 "zone_management": false, 00:12:42.350 "zone_append": false, 00:12:42.350 "compare": false, 00:12:42.350 "compare_and_write": false, 00:12:42.350 "abort": false, 00:12:42.350 "seek_hole": false, 00:12:42.350 "seek_data": false, 00:12:42.350 "copy": false, 00:12:42.350 "nvme_iov_md": false 00:12:42.350 }, 00:12:42.350 "memory_domains": [ 00:12:42.350 { 00:12:42.350 "dma_device_id": "system", 00:12:42.350 "dma_device_type": 1 00:12:42.350 }, 00:12:42.350 { 
00:12:42.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:42.350 "dma_device_type": 2 00:12:42.350 }, 00:12:42.350 { 00:12:42.350 "dma_device_id": "system", 00:12:42.350 "dma_device_type": 1 00:12:42.350 }, 00:12:42.350 { 00:12:42.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:42.350 "dma_device_type": 2 00:12:42.350 }, 00:12:42.350 { 00:12:42.350 "dma_device_id": "system", 00:12:42.350 "dma_device_type": 1 00:12:42.350 }, 00:12:42.350 { 00:12:42.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:42.350 "dma_device_type": 2 00:12:42.350 } 00:12:42.350 ], 00:12:42.350 "driver_specific": { 00:12:42.350 "raid": { 00:12:42.350 "uuid": "47c289cc-d9ec-4a63-8305-b4201bfa21a2", 00:12:42.350 "strip_size_kb": 0, 00:12:42.350 "state": "online", 00:12:42.350 "raid_level": "raid1", 00:12:42.350 "superblock": false, 00:12:42.350 "num_base_bdevs": 3, 00:12:42.350 "num_base_bdevs_discovered": 3, 00:12:42.350 "num_base_bdevs_operational": 3, 00:12:42.350 "base_bdevs_list": [ 00:12:42.350 { 00:12:42.350 "name": "BaseBdev1", 00:12:42.350 "uuid": "135e6e24-e039-457e-b82d-f907b3ccd01b", 00:12:42.350 "is_configured": true, 00:12:42.350 "data_offset": 0, 00:12:42.350 "data_size": 65536 00:12:42.350 }, 00:12:42.350 { 00:12:42.350 "name": "BaseBdev2", 00:12:42.350 "uuid": "69f75e63-fc4a-4330-9501-97a0aee7c5df", 00:12:42.350 "is_configured": true, 00:12:42.350 "data_offset": 0, 00:12:42.350 "data_size": 65536 00:12:42.350 }, 00:12:42.350 { 00:12:42.350 "name": "BaseBdev3", 00:12:42.350 "uuid": "cd4fe8d6-2540-476f-8e61-fa7945db416c", 00:12:42.350 "is_configured": true, 00:12:42.350 "data_offset": 0, 00:12:42.350 "data_size": 65536 00:12:42.350 } 00:12:42.350 ] 00:12:42.350 } 00:12:42.350 } 00:12:42.350 }' 00:12:42.350 06:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:42.350 06:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # 
base_bdev_names='BaseBdev1 00:12:42.350 BaseBdev2 00:12:42.350 BaseBdev3' 00:12:42.350 06:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:42.350 06:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:42.350 06:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:42.350 06:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:42.350 06:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.350 06:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.350 06:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:42.350 06:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.350 06:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:42.350 06:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:42.350 06:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:42.350 06:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:42.350 06:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:42.350 06:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.350 06:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.350 06:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:12:42.350 06:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:42.350 06:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:42.350 06:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:42.350 06:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:42.350 06:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.350 06:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.350 06:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:42.610 06:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.610 06:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:42.610 06:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:42.610 06:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:42.610 06:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.610 06:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.610 [2024-11-26 06:22:26.525713] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:42.610 06:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.610 06:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:42.610 06:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:42.610 06:22:26 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@198 -- # case $1 in 00:12:42.610 06:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:42.610 06:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:42.610 06:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:12:42.610 06:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:42.610 06:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:42.610 06:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:42.610 06:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:42.610 06:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:42.610 06:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:42.610 06:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:42.610 06:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:42.610 06:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:42.610 06:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.610 06:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.610 06:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:42.610 06:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.610 06:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.610 06:22:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:42.610 "name": "Existed_Raid", 00:12:42.610 "uuid": "47c289cc-d9ec-4a63-8305-b4201bfa21a2", 00:12:42.610 "strip_size_kb": 0, 00:12:42.610 "state": "online", 00:12:42.610 "raid_level": "raid1", 00:12:42.610 "superblock": false, 00:12:42.610 "num_base_bdevs": 3, 00:12:42.610 "num_base_bdevs_discovered": 2, 00:12:42.610 "num_base_bdevs_operational": 2, 00:12:42.610 "base_bdevs_list": [ 00:12:42.610 { 00:12:42.610 "name": null, 00:12:42.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.610 "is_configured": false, 00:12:42.610 "data_offset": 0, 00:12:42.610 "data_size": 65536 00:12:42.610 }, 00:12:42.610 { 00:12:42.610 "name": "BaseBdev2", 00:12:42.610 "uuid": "69f75e63-fc4a-4330-9501-97a0aee7c5df", 00:12:42.610 "is_configured": true, 00:12:42.610 "data_offset": 0, 00:12:42.610 "data_size": 65536 00:12:42.610 }, 00:12:42.610 { 00:12:42.610 "name": "BaseBdev3", 00:12:42.610 "uuid": "cd4fe8d6-2540-476f-8e61-fa7945db416c", 00:12:42.610 "is_configured": true, 00:12:42.610 "data_offset": 0, 00:12:42.610 "data_size": 65536 00:12:42.610 } 00:12:42.610 ] 00:12:42.610 }' 00:12:42.610 06:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:42.610 06:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.177 06:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:43.177 06:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:43.177 06:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.177 06:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.177 06:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.177 06:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 
-- # jq -r '.[0]["name"]' 00:12:43.177 06:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.178 06:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:43.178 06:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:43.178 06:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:43.178 06:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.178 06:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.178 [2024-11-26 06:22:27.158544] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:43.178 06:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.178 06:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:43.178 06:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:43.178 06:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.178 06:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.178 06:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.178 06:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:43.178 06:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.437 06:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:43.437 06:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:43.437 06:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # 
rpc_cmd bdev_malloc_delete BaseBdev3 00:12:43.437 06:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.437 06:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.437 [2024-11-26 06:22:27.326946] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:43.437 [2024-11-26 06:22:27.327076] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:43.437 [2024-11-26 06:22:27.445589] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:43.437 [2024-11-26 06:22:27.445659] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:43.437 [2024-11-26 06:22:27.445673] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:43.437 06:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.437 06:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:43.437 06:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:43.437 06:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.437 06:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.437 06:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.437 06:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:43.437 06:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.437 06:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:43.437 06:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 
00:12:43.437 06:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']'
00:12:43.437 06:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:12:43.437 06:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:12:43.437 06:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:12:43.437 06:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:43.437 06:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:43.437 BaseBdev2
00:12:43.437 06:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:43.437 06:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:12:43.437 06:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:12:43.437 06:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:12:43.437 06:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:12:43.437 06:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:12:43.437 06:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:12:43.437 06:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:12:43.437 06:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:43.437 06:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:43.437 06:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:43.437 06:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:12:43.437 06:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:43.437 06:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:43.696 [
00:12:43.696 {
00:12:43.696 "name": "BaseBdev2",
00:12:43.696 "aliases": [
00:12:43.696 "085f7ee3-622a-4d3c-8b4b-e498f11450a8"
00:12:43.696 ],
00:12:43.697 "product_name": "Malloc disk",
00:12:43.697 "block_size": 512,
00:12:43.697 "num_blocks": 65536,
00:12:43.697 "uuid": "085f7ee3-622a-4d3c-8b4b-e498f11450a8",
00:12:43.697 "assigned_rate_limits": {
00:12:43.697 "rw_ios_per_sec": 0,
00:12:43.697 "rw_mbytes_per_sec": 0,
00:12:43.697 "r_mbytes_per_sec": 0,
00:12:43.697 "w_mbytes_per_sec": 0
00:12:43.697 },
00:12:43.697 "claimed": false,
00:12:43.697 "zoned": false,
00:12:43.697 "supported_io_types": {
00:12:43.697 "read": true,
00:12:43.697 "write": true,
00:12:43.697 "unmap": true,
00:12:43.697 "flush": true,
00:12:43.697 "reset": true,
00:12:43.697 "nvme_admin": false,
00:12:43.697 "nvme_io": false,
00:12:43.697 "nvme_io_md": false,
00:12:43.697 "write_zeroes": true,
00:12:43.697 "zcopy": true,
00:12:43.697 "get_zone_info": false,
00:12:43.697 "zone_management": false,
00:12:43.697 "zone_append": false,
00:12:43.697 "compare": false,
00:12:43.697 "compare_and_write": false,
00:12:43.697 "abort": true,
00:12:43.697 "seek_hole": false,
00:12:43.697 "seek_data": false,
00:12:43.697 "copy": true,
00:12:43.697 "nvme_iov_md": false
00:12:43.697 },
00:12:43.697 "memory_domains": [
00:12:43.697 {
00:12:43.697 "dma_device_id": "system",
00:12:43.697 "dma_device_type": 1
00:12:43.697 },
00:12:43.697 {
00:12:43.697 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:43.697 "dma_device_type": 2
00:12:43.697 }
00:12:43.697 ],
00:12:43.697 "driver_specific": {}
00:12:43.697 }
00:12:43.697 ]
00:12:43.697 06:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:43.697 06:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:12:43.697 06:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:12:43.697 06:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:12:43.697 06:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:12:43.697 06:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:43.697 06:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:43.697 BaseBdev3
00:12:43.697 06:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:43.697 06:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:12:43.697 06:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:12:43.697 06:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:12:43.697 06:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:12:43.697 06:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:12:43.697 06:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:12:43.697 06:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:12:43.697 06:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:43.697 06:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:43.697 06:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:43.697 06:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:12:43.697 06:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:43.697 06:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:43.697 [
00:12:43.697 {
00:12:43.697 "name": "BaseBdev3",
00:12:43.697 "aliases": [
00:12:43.697 "56103ba9-10c9-4d63-af67-414203d31b51"
00:12:43.697 ],
00:12:43.697 "product_name": "Malloc disk",
00:12:43.697 "block_size": 512,
00:12:43.697 "num_blocks": 65536,
00:12:43.697 "uuid": "56103ba9-10c9-4d63-af67-414203d31b51",
00:12:43.697 "assigned_rate_limits": {
00:12:43.697 "rw_ios_per_sec": 0,
00:12:43.697 "rw_mbytes_per_sec": 0,
00:12:43.697 "r_mbytes_per_sec": 0,
00:12:43.697 "w_mbytes_per_sec": 0
00:12:43.697 },
00:12:43.697 "claimed": false,
00:12:43.697 "zoned": false,
00:12:43.697 "supported_io_types": {
00:12:43.697 "read": true,
00:12:43.697 "write": true,
00:12:43.697 "unmap": true,
00:12:43.697 "flush": true,
00:12:43.697 "reset": true,
00:12:43.697 "nvme_admin": false,
00:12:43.697 "nvme_io": false,
00:12:43.697 "nvme_io_md": false,
00:12:43.697 "write_zeroes": true,
00:12:43.697 "zcopy": true,
00:12:43.697 "get_zone_info": false,
00:12:43.697 "zone_management": false,
00:12:43.697 "zone_append": false,
00:12:43.697 "compare": false,
00:12:43.697 "compare_and_write": false,
00:12:43.697 "abort": true,
00:12:43.697 "seek_hole": false,
00:12:43.697 "seek_data": false,
00:12:43.697 "copy": true,
00:12:43.697 "nvme_iov_md": false
00:12:43.697 },
00:12:43.697 "memory_domains": [
00:12:43.697 {
00:12:43.697 "dma_device_id": "system",
00:12:43.697 "dma_device_type": 1
00:12:43.697 },
00:12:43.697 {
00:12:43.697 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:43.697 "dma_device_type": 2
00:12:43.697 }
00:12:43.697 ],
00:12:43.697 "driver_specific": {}
00:12:43.697 }
00:12:43.697 ]
00:12:43.697 06:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:43.697 06:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:12:43.697 06:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:12:43.697 06:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:12:43.697 06:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:12:43.697 06:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:43.697 06:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:43.697 [2024-11-26 06:22:27.656519] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:12:43.697 [2024-11-26 06:22:27.656650] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:12:43.697 [2024-11-26 06:22:27.656717] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:12:43.697 [2024-11-26 06:22:27.659288] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:12:43.697 06:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:43.697 06:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:12:43.697 06:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:43.697 06:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:43.697 06:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:43.697 06:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:43.697 06:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:43.697 06:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:43.697 06:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:43.697 06:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:43.697 06:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:43.697 06:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:43.697 06:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:43.697 06:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:43.697 06:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:43.697 06:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:43.697 06:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:43.697 "name": "Existed_Raid",
00:12:43.697 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:43.697 "strip_size_kb": 0,
00:12:43.697 "state": "configuring",
00:12:43.697 "raid_level": "raid1",
00:12:43.697 "superblock": false,
00:12:43.697 "num_base_bdevs": 3,
00:12:43.697 "num_base_bdevs_discovered": 2,
00:12:43.697 "num_base_bdevs_operational": 3,
00:12:43.697 "base_bdevs_list": [
00:12:43.697 {
00:12:43.697 "name": "BaseBdev1",
00:12:43.697 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:43.697 "is_configured": false,
00:12:43.697 "data_offset": 0,
00:12:43.697 "data_size": 0
00:12:43.697 },
00:12:43.697 {
00:12:43.697 "name": "BaseBdev2",
00:12:43.697 "uuid": "085f7ee3-622a-4d3c-8b4b-e498f11450a8",
00:12:43.697 "is_configured": true,
00:12:43.697 "data_offset": 0,
00:12:43.697 "data_size": 65536
00:12:43.697 },
00:12:43.697 {
00:12:43.697 "name": "BaseBdev3",
00:12:43.697 "uuid": "56103ba9-10c9-4d63-af67-414203d31b51",
00:12:43.697 "is_configured": true,
00:12:43.697 "data_offset": 0,
00:12:43.697 "data_size": 65536
00:12:43.697 }
00:12:43.697 ]
00:12:43.697 }'
00:12:43.697 06:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:43.697 06:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:44.266 06:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:12:44.267 06:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:44.267 06:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:44.267 [2024-11-26 06:22:28.115807] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:12:44.267 06:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:44.267 06:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:12:44.267 06:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:44.267 06:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:44.267 06:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:44.267 06:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:44.267 06:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:44.267 06:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:44.267 06:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:44.267 06:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:44.267 06:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:44.267 06:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:44.267 06:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:44.267 06:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:44.267 06:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:44.267 06:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:44.267 06:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:44.267 "name": "Existed_Raid",
00:12:44.267 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:44.267 "strip_size_kb": 0,
00:12:44.267 "state": "configuring",
00:12:44.267 "raid_level": "raid1",
00:12:44.267 "superblock": false,
00:12:44.267 "num_base_bdevs": 3,
00:12:44.267 "num_base_bdevs_discovered": 1,
00:12:44.267 "num_base_bdevs_operational": 3,
00:12:44.267 "base_bdevs_list": [
00:12:44.267 {
00:12:44.267 "name": "BaseBdev1",
00:12:44.267 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:44.267 "is_configured": false,
00:12:44.267 "data_offset": 0,
00:12:44.267 "data_size": 0
00:12:44.267 },
00:12:44.267 {
00:12:44.267 "name": null,
00:12:44.267 "uuid": "085f7ee3-622a-4d3c-8b4b-e498f11450a8",
00:12:44.267 "is_configured": false,
00:12:44.267 "data_offset": 0,
00:12:44.267 "data_size": 65536
00:12:44.267 },
00:12:44.267 {
00:12:44.267 "name": "BaseBdev3",
00:12:44.267 "uuid": "56103ba9-10c9-4d63-af67-414203d31b51",
00:12:44.267 "is_configured": true,
00:12:44.267 "data_offset": 0,
00:12:44.267 "data_size": 65536
00:12:44.267 }
00:12:44.267 ]
00:12:44.267 }'
00:12:44.267 06:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:44.527 06:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:44.527 06:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:44.527 06:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:44.527 06:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:12:44.527 06:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:44.527 06:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:12:44.527 06:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:12:44.527 06:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:44.527 06:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:44.527 [2024-11-26 06:22:28.611047] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:12:44.527 BaseBdev1
00:12:44.527 06:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:44.527 06:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:12:44.527 06:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:12:44.527 06:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:12:44.527 06:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:12:44.527 06:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:12:44.527 06:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:12:44.527 06:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:12:44.527 06:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:44.527 06:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:44.527 06:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:44.527 06:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:12:44.527 06:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:44.527 06:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:44.527 [
00:12:44.527 {
00:12:44.527 "name": "BaseBdev1",
00:12:44.527 "aliases": [
00:12:44.528 "5c2a6f10-38cf-4d59-bf23-7d90d4a59619"
00:12:44.528 ],
00:12:44.528 "product_name": "Malloc disk",
00:12:44.528 "block_size": 512,
00:12:44.528 "num_blocks": 65536,
00:12:44.528 "uuid": "5c2a6f10-38cf-4d59-bf23-7d90d4a59619",
00:12:44.528 "assigned_rate_limits": {
00:12:44.528 "rw_ios_per_sec": 0,
00:12:44.528 "rw_mbytes_per_sec": 0,
00:12:44.528 "r_mbytes_per_sec": 0,
00:12:44.528 "w_mbytes_per_sec": 0
00:12:44.528 },
00:12:44.528 "claimed": true,
00:12:44.528 "claim_type": "exclusive_write",
00:12:44.528 "zoned": false,
00:12:44.528 "supported_io_types": {
00:12:44.528 "read": true,
00:12:44.528 "write": true,
00:12:44.528 "unmap": true,
00:12:44.528 "flush": true,
00:12:44.528 "reset": true,
00:12:44.528 "nvme_admin": false,
00:12:44.528 "nvme_io": false,
00:12:44.528 "nvme_io_md": false,
00:12:44.528 "write_zeroes": true,
00:12:44.528 "zcopy": true,
00:12:44.528 "get_zone_info": false,
00:12:44.528 "zone_management": false,
00:12:44.528 "zone_append": false,
00:12:44.528 "compare": false,
00:12:44.528 "compare_and_write": false,
00:12:44.528 "abort": true,
00:12:44.528 "seek_hole": false,
00:12:44.528 "seek_data": false,
00:12:44.528 "copy": true,
00:12:44.528 "nvme_iov_md": false
00:12:44.528 },
00:12:44.528 "memory_domains": [
00:12:44.528 {
00:12:44.528 "dma_device_id": "system",
00:12:44.528 "dma_device_type": 1
00:12:44.528 },
00:12:44.528 {
00:12:44.528 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:44.528 "dma_device_type": 2
00:12:44.528 }
00:12:44.528 ],
00:12:44.528 "driver_specific": {}
00:12:44.528 }
00:12:44.528 ]
00:12:44.528 06:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:44.528 06:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:12:44.528 06:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:12:44.528 06:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:44.528 06:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:44.528 06:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:44.528 06:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:44.528 06:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:44.528 06:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:44.528 06:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:44.528 06:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:44.528 06:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:44.528 06:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:44.528 06:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:44.528 06:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:44.528 06:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:44.838 06:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:44.838 06:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:44.838 "name": "Existed_Raid",
00:12:44.838 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:44.838 "strip_size_kb": 0,
00:12:44.838 "state": "configuring",
00:12:44.838 "raid_level": "raid1",
00:12:44.838 "superblock": false,
00:12:44.838 "num_base_bdevs": 3,
00:12:44.838 "num_base_bdevs_discovered": 2,
00:12:44.838 "num_base_bdevs_operational": 3,
00:12:44.838 "base_bdevs_list": [
00:12:44.838 {
00:12:44.838 "name": "BaseBdev1",
00:12:44.838 "uuid": "5c2a6f10-38cf-4d59-bf23-7d90d4a59619",
00:12:44.838 "is_configured": true,
00:12:44.838 "data_offset": 0,
00:12:44.838 "data_size": 65536
00:12:44.838 },
00:12:44.838 {
00:12:44.838 "name": null,
00:12:44.838 "uuid": "085f7ee3-622a-4d3c-8b4b-e498f11450a8",
00:12:44.838 "is_configured": false,
00:12:44.838 "data_offset": 0,
00:12:44.838 "data_size": 65536
00:12:44.838 },
00:12:44.838 {
00:12:44.838 "name": "BaseBdev3",
00:12:44.838 "uuid": "56103ba9-10c9-4d63-af67-414203d31b51",
00:12:44.838 "is_configured": true,
00:12:44.838 "data_offset": 0,
00:12:44.838 "data_size": 65536
00:12:44.838 }
00:12:44.838 ]
00:12:44.838 }'
00:12:44.838 06:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:44.838 06:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:45.097 06:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:45.097 06:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:45.097 06:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:45.097 06:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:12:45.097 06:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:45.097 06:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:12:45.097 06:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:12:45.097 06:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:45.097 06:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:45.097 [2024-11-26 06:22:29.134252] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:12:45.097 06:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:45.097 06:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:12:45.097 06:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:45.097 06:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:45.097 06:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:45.097 06:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:45.097 06:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:45.097 06:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:45.097 06:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:45.097 06:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:45.097 06:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:45.097 06:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:45.097 06:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:45.097 06:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:45.097 06:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:45.097 06:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:45.097 06:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:45.097 "name": "Existed_Raid",
00:12:45.097 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:45.097 "strip_size_kb": 0,
00:12:45.097 "state": "configuring",
00:12:45.097 "raid_level": "raid1",
00:12:45.097 "superblock": false,
00:12:45.097 "num_base_bdevs": 3,
00:12:45.097 "num_base_bdevs_discovered": 1,
00:12:45.097 "num_base_bdevs_operational": 3,
00:12:45.097 "base_bdevs_list": [
00:12:45.097 {
00:12:45.097 "name": "BaseBdev1",
00:12:45.097 "uuid": "5c2a6f10-38cf-4d59-bf23-7d90d4a59619",
00:12:45.097 "is_configured": true,
00:12:45.097 "data_offset": 0,
00:12:45.097 "data_size": 65536
00:12:45.097 },
00:12:45.097 {
00:12:45.097 "name": null,
00:12:45.097 "uuid": "085f7ee3-622a-4d3c-8b4b-e498f11450a8",
00:12:45.097 "is_configured": false,
00:12:45.097 "data_offset": 0,
00:12:45.097 "data_size": 65536
00:12:45.097 },
00:12:45.097 {
00:12:45.097 "name": null,
00:12:45.097 "uuid": "56103ba9-10c9-4d63-af67-414203d31b51",
00:12:45.097 "is_configured": false,
00:12:45.097 "data_offset": 0,
00:12:45.097 "data_size": 65536
00:12:45.097 }
00:12:45.097 ]
00:12:45.097 }'
00:12:45.097 06:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:45.097 06:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:45.666 06:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:45.666 06:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:45.666 06:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:45.666 06:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:12:45.666 06:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:45.666 06:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:12:45.666 06:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:12:45.666 06:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:45.666 06:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:45.666 [2024-11-26 06:22:29.597559] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:12:45.666 06:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:45.666 06:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:12:45.666 06:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:45.666 06:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:45.666 06:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:45.666 06:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:45.666 06:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:45.666 06:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:45.666 06:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:45.666 06:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:45.666 06:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:45.666 06:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:45.666 06:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:45.666 06:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:45.666 06:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:45.666 06:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:45.666 06:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:45.666 "name": "Existed_Raid",
00:12:45.666 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:45.666 "strip_size_kb": 0,
00:12:45.666 "state": "configuring",
00:12:45.666 "raid_level": "raid1",
00:12:45.666 "superblock": false,
00:12:45.666 "num_base_bdevs": 3,
00:12:45.666 "num_base_bdevs_discovered": 2,
00:12:45.666 "num_base_bdevs_operational": 3,
00:12:45.666 "base_bdevs_list": [
00:12:45.666 {
00:12:45.666 "name": "BaseBdev1",
00:12:45.666 "uuid": "5c2a6f10-38cf-4d59-bf23-7d90d4a59619",
00:12:45.667 "is_configured": true,
00:12:45.667 "data_offset": 0,
00:12:45.667 "data_size": 65536
00:12:45.667 },
00:12:45.667 {
00:12:45.667 "name": null,
00:12:45.667 "uuid": "085f7ee3-622a-4d3c-8b4b-e498f11450a8",
00:12:45.667 "is_configured": false,
00:12:45.667 "data_offset": 0,
00:12:45.667 "data_size": 65536
00:12:45.667 },
00:12:45.667 {
00:12:45.667 "name": "BaseBdev3",
00:12:45.667 "uuid": "56103ba9-10c9-4d63-af67-414203d31b51",
00:12:45.667 "is_configured": true,
00:12:45.667 "data_offset": 0,
00:12:45.667 "data_size": 65536
00:12:45.667 }
00:12:45.667 ]
00:12:45.667 }'
00:12:45.667 06:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:45.667 06:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:45.926 06:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:45.926 06:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:45.926 06:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:45.926 06:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:12:45.926 06:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:46.184 06:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:12:46.184 06:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:12:46.184 06:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:46.184 06:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:46.184 [2024-11-26 06:22:30.080750] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:12:46.184 06:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:46.184 06:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:12:46.184 06:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:46.184 06:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:46.184 06:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:46.184 06:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:46.184 06:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:46.184 06:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:46.184 06:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:46.184 06:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:46.184 06:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:46.184 06:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:46.184 06:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:46.184 06:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:46.184 06:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:46.184 06:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:46.184 06:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:46.184 "name": "Existed_Raid",
00:12:46.184 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:46.184 "strip_size_kb": 0,
00:12:46.184 "state": "configuring",
00:12:46.184 "raid_level": "raid1",
00:12:46.184 "superblock": false,
00:12:46.184 "num_base_bdevs": 3,
00:12:46.184 "num_base_bdevs_discovered": 1,
00:12:46.184 "num_base_bdevs_operational": 3,
00:12:46.184 "base_bdevs_list": [
00:12:46.184 {
00:12:46.184 "name": null,
00:12:46.184 "uuid": "5c2a6f10-38cf-4d59-bf23-7d90d4a59619",
00:12:46.184 "is_configured": false,
00:12:46.184 "data_offset": 0,
00:12:46.184 "data_size": 65536
00:12:46.184 },
00:12:46.184 {
00:12:46.184 "name": null,
00:12:46.184 "uuid": "085f7ee3-622a-4d3c-8b4b-e498f11450a8",
00:12:46.184 "is_configured": false,
00:12:46.184 "data_offset": 0,
00:12:46.184 "data_size": 65536
00:12:46.184 },
00:12:46.184 {
00:12:46.184 "name": "BaseBdev3",
00:12:46.184 "uuid": "56103ba9-10c9-4d63-af67-414203d31b51",
00:12:46.184 "is_configured": true,
00:12:46.184 "data_offset": 0,
00:12:46.184 "data_size": 65536
00:12:46.184 }
00:12:46.184 ]
00:12:46.184 }'
00:12:46.184 06:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:46.184 06:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:46.750 06:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:46.750 06:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:46.750 06:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:46.750 06:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:12:46.750 06:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:46.750 06:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:12:46.750 06:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:12:46.750 06:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:46.750 06:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:46.750 [2024-11-26 06:22:30.698614] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:12:46.750 06:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:46.750 06:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:12:46.750 06:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:46.750 06:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:46.750 06:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:46.750 06:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:46.750 06:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:46.750 06:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:46.750 06:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:46.750 06:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:46.750 06:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:46.750 06:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:46.750 06:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:46.750 06:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:46.750 06:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:46.750 06:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:46.750 06:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:46.750 "name": "Existed_Raid",
00:12:46.750 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:46.750 "strip_size_kb": 0,
00:12:46.750 "state": "configuring",
00:12:46.750 "raid_level": "raid1",
00:12:46.750 "superblock": false,
00:12:46.750 "num_base_bdevs": 3,
00:12:46.750 "num_base_bdevs_discovered": 2,
00:12:46.750 "num_base_bdevs_operational": 3,
00:12:46.750 "base_bdevs_list": [
00:12:46.750 {
00:12:46.750 "name": null,
00:12:46.751 "uuid": "5c2a6f10-38cf-4d59-bf23-7d90d4a59619",
00:12:46.751 "is_configured": false,
00:12:46.751 "data_offset": 0,
00:12:46.751 "data_size": 65536
00:12:46.751 },
00:12:46.751 {
00:12:46.751 "name": "BaseBdev2",
00:12:46.751 "uuid": "085f7ee3-622a-4d3c-8b4b-e498f11450a8",
00:12:46.751 "is_configured": true,
00:12:46.751 "data_offset": 0,
00:12:46.751 "data_size": 65536
00:12:46.751 },
00:12:46.751 {
00:12:46.751 "name": "BaseBdev3",
00:12:46.751 "uuid": "56103ba9-10c9-4d63-af67-414203d31b51",
00:12:46.751 "is_configured": true,
00:12:46.751 "data_offset": 0,
00:12:46.751 "data_size": 65536
00:12:46.751 }
00:12:46.751 ]
00:12:46.751 }'
00:12:46.751 06:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:46.751 06:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:47.317 06:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:47.317 06:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:47.317 06:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:47.317 06:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:12:47.317 06:22:31
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.317 06:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:47.317 06:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:47.317 06:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.317 06:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.317 06:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.317 06:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.317 06:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 5c2a6f10-38cf-4d59-bf23-7d90d4a59619 00:12:47.318 06:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.318 06:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.318 [2024-11-26 06:22:31.300720] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:47.318 [2024-11-26 06:22:31.300792] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:47.318 [2024-11-26 06:22:31.300801] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:47.318 [2024-11-26 06:22:31.301152] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:47.318 [2024-11-26 06:22:31.301366] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:47.318 [2024-11-26 06:22:31.301393] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:47.318 NewBaseBdev 00:12:47.318 [2024-11-26 
06:22:31.301780] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:47.318 06:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.318 06:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:47.318 06:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:47.318 06:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:47.318 06:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:47.318 06:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:47.318 06:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:47.318 06:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:47.318 06:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.318 06:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.318 06:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.318 06:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:47.318 06:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.318 06:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.318 [ 00:12:47.318 { 00:12:47.318 "name": "NewBaseBdev", 00:12:47.318 "aliases": [ 00:12:47.318 "5c2a6f10-38cf-4d59-bf23-7d90d4a59619" 00:12:47.318 ], 00:12:47.318 "product_name": "Malloc disk", 00:12:47.318 "block_size": 512, 00:12:47.318 "num_blocks": 65536, 00:12:47.318 "uuid": "5c2a6f10-38cf-4d59-bf23-7d90d4a59619", 
00:12:47.318 "assigned_rate_limits": { 00:12:47.318 "rw_ios_per_sec": 0, 00:12:47.318 "rw_mbytes_per_sec": 0, 00:12:47.318 "r_mbytes_per_sec": 0, 00:12:47.318 "w_mbytes_per_sec": 0 00:12:47.318 }, 00:12:47.318 "claimed": true, 00:12:47.318 "claim_type": "exclusive_write", 00:12:47.318 "zoned": false, 00:12:47.318 "supported_io_types": { 00:12:47.318 "read": true, 00:12:47.318 "write": true, 00:12:47.318 "unmap": true, 00:12:47.318 "flush": true, 00:12:47.318 "reset": true, 00:12:47.318 "nvme_admin": false, 00:12:47.318 "nvme_io": false, 00:12:47.318 "nvme_io_md": false, 00:12:47.318 "write_zeroes": true, 00:12:47.318 "zcopy": true, 00:12:47.318 "get_zone_info": false, 00:12:47.318 "zone_management": false, 00:12:47.318 "zone_append": false, 00:12:47.318 "compare": false, 00:12:47.318 "compare_and_write": false, 00:12:47.318 "abort": true, 00:12:47.318 "seek_hole": false, 00:12:47.318 "seek_data": false, 00:12:47.318 "copy": true, 00:12:47.318 "nvme_iov_md": false 00:12:47.318 }, 00:12:47.318 "memory_domains": [ 00:12:47.318 { 00:12:47.318 "dma_device_id": "system", 00:12:47.318 "dma_device_type": 1 00:12:47.318 }, 00:12:47.318 { 00:12:47.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:47.318 "dma_device_type": 2 00:12:47.318 } 00:12:47.318 ], 00:12:47.318 "driver_specific": {} 00:12:47.318 } 00:12:47.318 ] 00:12:47.318 06:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.318 06:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:47.318 06:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:47.318 06:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:47.318 06:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:47.318 06:22:31 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:47.318 06:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:47.318 06:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:47.318 06:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:47.318 06:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:47.318 06:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:47.318 06:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:47.318 06:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.318 06:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:47.318 06:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.318 06:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.318 06:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.318 06:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:47.318 "name": "Existed_Raid", 00:12:47.318 "uuid": "d93f9159-4d20-4f25-bdfb-9c6109ec352e", 00:12:47.318 "strip_size_kb": 0, 00:12:47.318 "state": "online", 00:12:47.318 "raid_level": "raid1", 00:12:47.318 "superblock": false, 00:12:47.318 "num_base_bdevs": 3, 00:12:47.318 "num_base_bdevs_discovered": 3, 00:12:47.318 "num_base_bdevs_operational": 3, 00:12:47.318 "base_bdevs_list": [ 00:12:47.318 { 00:12:47.318 "name": "NewBaseBdev", 00:12:47.318 "uuid": "5c2a6f10-38cf-4d59-bf23-7d90d4a59619", 00:12:47.318 "is_configured": true, 00:12:47.318 "data_offset": 0, 00:12:47.318 "data_size": 65536 
00:12:47.318 }, 00:12:47.318 { 00:12:47.318 "name": "BaseBdev2", 00:12:47.318 "uuid": "085f7ee3-622a-4d3c-8b4b-e498f11450a8", 00:12:47.318 "is_configured": true, 00:12:47.318 "data_offset": 0, 00:12:47.318 "data_size": 65536 00:12:47.318 }, 00:12:47.318 { 00:12:47.319 "name": "BaseBdev3", 00:12:47.319 "uuid": "56103ba9-10c9-4d63-af67-414203d31b51", 00:12:47.319 "is_configured": true, 00:12:47.319 "data_offset": 0, 00:12:47.319 "data_size": 65536 00:12:47.319 } 00:12:47.319 ] 00:12:47.319 }' 00:12:47.319 06:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:47.319 06:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.886 06:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:47.886 06:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:47.886 06:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:47.886 06:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:47.886 06:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:47.886 06:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:47.886 06:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:47.886 06:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:47.886 06:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.886 06:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.886 [2024-11-26 06:22:31.796355] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:47.886 06:22:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.886 06:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:47.886 "name": "Existed_Raid", 00:12:47.886 "aliases": [ 00:12:47.886 "d93f9159-4d20-4f25-bdfb-9c6109ec352e" 00:12:47.886 ], 00:12:47.886 "product_name": "Raid Volume", 00:12:47.886 "block_size": 512, 00:12:47.886 "num_blocks": 65536, 00:12:47.886 "uuid": "d93f9159-4d20-4f25-bdfb-9c6109ec352e", 00:12:47.886 "assigned_rate_limits": { 00:12:47.886 "rw_ios_per_sec": 0, 00:12:47.886 "rw_mbytes_per_sec": 0, 00:12:47.886 "r_mbytes_per_sec": 0, 00:12:47.886 "w_mbytes_per_sec": 0 00:12:47.886 }, 00:12:47.886 "claimed": false, 00:12:47.886 "zoned": false, 00:12:47.886 "supported_io_types": { 00:12:47.886 "read": true, 00:12:47.886 "write": true, 00:12:47.886 "unmap": false, 00:12:47.886 "flush": false, 00:12:47.886 "reset": true, 00:12:47.886 "nvme_admin": false, 00:12:47.886 "nvme_io": false, 00:12:47.886 "nvme_io_md": false, 00:12:47.886 "write_zeroes": true, 00:12:47.886 "zcopy": false, 00:12:47.886 "get_zone_info": false, 00:12:47.886 "zone_management": false, 00:12:47.886 "zone_append": false, 00:12:47.886 "compare": false, 00:12:47.886 "compare_and_write": false, 00:12:47.886 "abort": false, 00:12:47.886 "seek_hole": false, 00:12:47.886 "seek_data": false, 00:12:47.886 "copy": false, 00:12:47.886 "nvme_iov_md": false 00:12:47.886 }, 00:12:47.886 "memory_domains": [ 00:12:47.886 { 00:12:47.886 "dma_device_id": "system", 00:12:47.886 "dma_device_type": 1 00:12:47.886 }, 00:12:47.886 { 00:12:47.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:47.886 "dma_device_type": 2 00:12:47.886 }, 00:12:47.886 { 00:12:47.886 "dma_device_id": "system", 00:12:47.886 "dma_device_type": 1 00:12:47.886 }, 00:12:47.886 { 00:12:47.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:47.886 "dma_device_type": 2 00:12:47.886 }, 00:12:47.886 { 00:12:47.886 "dma_device_id": "system", 00:12:47.886 "dma_device_type": 1 00:12:47.886 }, 
00:12:47.886 { 00:12:47.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:47.886 "dma_device_type": 2 00:12:47.886 } 00:12:47.886 ], 00:12:47.886 "driver_specific": { 00:12:47.886 "raid": { 00:12:47.886 "uuid": "d93f9159-4d20-4f25-bdfb-9c6109ec352e", 00:12:47.886 "strip_size_kb": 0, 00:12:47.886 "state": "online", 00:12:47.886 "raid_level": "raid1", 00:12:47.886 "superblock": false, 00:12:47.886 "num_base_bdevs": 3, 00:12:47.886 "num_base_bdevs_discovered": 3, 00:12:47.886 "num_base_bdevs_operational": 3, 00:12:47.886 "base_bdevs_list": [ 00:12:47.886 { 00:12:47.886 "name": "NewBaseBdev", 00:12:47.886 "uuid": "5c2a6f10-38cf-4d59-bf23-7d90d4a59619", 00:12:47.886 "is_configured": true, 00:12:47.886 "data_offset": 0, 00:12:47.886 "data_size": 65536 00:12:47.886 }, 00:12:47.886 { 00:12:47.886 "name": "BaseBdev2", 00:12:47.886 "uuid": "085f7ee3-622a-4d3c-8b4b-e498f11450a8", 00:12:47.886 "is_configured": true, 00:12:47.886 "data_offset": 0, 00:12:47.886 "data_size": 65536 00:12:47.886 }, 00:12:47.886 { 00:12:47.886 "name": "BaseBdev3", 00:12:47.886 "uuid": "56103ba9-10c9-4d63-af67-414203d31b51", 00:12:47.886 "is_configured": true, 00:12:47.886 "data_offset": 0, 00:12:47.886 "data_size": 65536 00:12:47.886 } 00:12:47.886 ] 00:12:47.886 } 00:12:47.886 } 00:12:47.886 }' 00:12:47.886 06:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:47.886 06:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:47.886 BaseBdev2 00:12:47.886 BaseBdev3' 00:12:47.886 06:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:47.886 06:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:47.886 06:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:12:47.886 06:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:47.886 06:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.886 06:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.886 06:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:47.886 06:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.886 06:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:47.886 06:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:47.886 06:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:47.886 06:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:47.886 06:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.886 06:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.886 06:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:47.886 06:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.887 06:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:47.887 06:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:47.887 06:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:47.887 06:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:12:47.887 06:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.887 06:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.887 06:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:48.145 06:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.145 06:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:48.145 06:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:48.145 06:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:48.145 06:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.145 06:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.145 [2024-11-26 06:22:32.059608] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:48.145 [2024-11-26 06:22:32.059735] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:48.145 [2024-11-26 06:22:32.059875] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:48.145 [2024-11-26 06:22:32.060271] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:48.145 [2024-11-26 06:22:32.060288] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:48.145 06:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.145 06:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67827 00:12:48.145 06:22:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 67827 ']' 00:12:48.145 06:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 67827 00:12:48.145 06:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:12:48.146 06:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:48.146 06:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67827 00:12:48.146 06:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:48.146 06:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:48.146 06:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67827' 00:12:48.146 killing process with pid 67827 00:12:48.146 06:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 67827 00:12:48.146 [2024-11-26 06:22:32.110534] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:48.146 06:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 67827 00:12:48.404 [2024-11-26 06:22:32.477943] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:49.780 06:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:49.780 00:12:49.781 real 0m11.099s 00:12:49.781 user 0m17.195s 00:12:49.781 sys 0m1.993s 00:12:49.781 06:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:49.781 06:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.781 ************************************ 00:12:49.781 END TEST raid_state_function_test 00:12:49.781 ************************************ 00:12:49.781 06:22:33 bdev_raid -- bdev/bdev_raid.sh@969 -- # 
run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:12:49.781 06:22:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:49.781 06:22:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:49.781 06:22:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:49.781 ************************************ 00:12:49.781 START TEST raid_state_function_test_sb 00:12:49.781 ************************************ 00:12:49.781 06:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:12:49.781 06:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:12:49.781 06:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:12:49.781 06:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:49.781 06:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:49.781 06:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:49.781 06:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:49.781 06:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:49.781 06:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:49.781 06:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:49.781 06:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:49.781 06:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:49.781 06:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:49.781 06:22:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:49.781 06:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:49.781 06:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:49.781 06:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:49.781 06:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:49.781 06:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:49.781 06:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:49.781 06:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:49.781 06:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:49.781 06:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:12:49.781 06:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:12:49.781 06:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:49.781 06:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:49.781 06:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68456 00:12:49.781 06:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:49.781 06:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68456' 00:12:49.781 Process raid pid: 68456 00:12:49.781 06:22:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68456 00:12:49.781 06:22:33 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 68456 ']' 00:12:49.781 06:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:49.781 06:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:49.781 06:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:49.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:49.781 06:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:49.781 06:22:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.040 [2024-11-26 06:22:33.988354] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:12:50.040 [2024-11-26 06:22:33.988610] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:50.300 [2024-11-26 06:22:34.173163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:50.300 [2024-11-26 06:22:34.322880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.560 [2024-11-26 06:22:34.578496] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:50.560 [2024-11-26 06:22:34.578675] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:50.819 06:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:50.819 06:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:50.820 06:22:34 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:50.820 06:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.820 06:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.820 [2024-11-26 06:22:34.862604] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:50.820 [2024-11-26 06:22:34.862752] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:50.820 [2024-11-26 06:22:34.862791] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:50.820 [2024-11-26 06:22:34.862821] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:50.820 [2024-11-26 06:22:34.862843] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:50.820 [2024-11-26 06:22:34.862905] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:50.820 06:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.820 06:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:50.820 06:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:50.820 06:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:50.820 06:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:50.820 06:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:50.820 06:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:50.820 
06:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.820 06:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.820 06:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.820 06:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.820 06:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.820 06:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:50.820 06:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.820 06:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.820 06:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.820 06:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.820 "name": "Existed_Raid", 00:12:50.820 "uuid": "d11e9978-88dd-4adf-8bb7-f5a7f450716b", 00:12:50.820 "strip_size_kb": 0, 00:12:50.820 "state": "configuring", 00:12:50.820 "raid_level": "raid1", 00:12:50.820 "superblock": true, 00:12:50.820 "num_base_bdevs": 3, 00:12:50.820 "num_base_bdevs_discovered": 0, 00:12:50.820 "num_base_bdevs_operational": 3, 00:12:50.820 "base_bdevs_list": [ 00:12:50.820 { 00:12:50.820 "name": "BaseBdev1", 00:12:50.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.820 "is_configured": false, 00:12:50.820 "data_offset": 0, 00:12:50.820 "data_size": 0 00:12:50.820 }, 00:12:50.820 { 00:12:50.820 "name": "BaseBdev2", 00:12:50.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.820 "is_configured": false, 00:12:50.820 "data_offset": 0, 00:12:50.820 "data_size": 0 00:12:50.820 }, 00:12:50.820 { 00:12:50.820 
"name": "BaseBdev3", 00:12:50.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.820 "is_configured": false, 00:12:50.820 "data_offset": 0, 00:12:50.820 "data_size": 0 00:12:50.820 } 00:12:50.820 ] 00:12:50.820 }' 00:12:50.820 06:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.820 06:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.390 06:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:51.390 06:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.390 06:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.390 [2024-11-26 06:22:35.281867] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:51.390 [2024-11-26 06:22:35.281914] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:51.390 06:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.390 06:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:51.390 06:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.390 06:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.390 [2024-11-26 06:22:35.289837] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:51.390 [2024-11-26 06:22:35.289891] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:51.390 [2024-11-26 06:22:35.289901] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:51.390 [2024-11-26 
06:22:35.289912] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:51.390 [2024-11-26 06:22:35.289918] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:51.390 [2024-11-26 06:22:35.289928] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:51.390 06:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.390 06:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:51.390 06:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.390 06:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.390 [2024-11-26 06:22:35.345103] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:51.390 BaseBdev1 00:12:51.390 06:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.390 06:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:51.390 06:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:51.390 06:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:51.390 06:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:51.390 06:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:51.390 06:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:51.390 06:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:51.390 06:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 
-- # xtrace_disable 00:12:51.390 06:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.390 06:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.390 06:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:51.390 06:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.390 06:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.390 [ 00:12:51.390 { 00:12:51.390 "name": "BaseBdev1", 00:12:51.390 "aliases": [ 00:12:51.390 "8a9ba7c1-494f-4643-bd3f-ca17d8e1cc06" 00:12:51.390 ], 00:12:51.390 "product_name": "Malloc disk", 00:12:51.390 "block_size": 512, 00:12:51.390 "num_blocks": 65536, 00:12:51.390 "uuid": "8a9ba7c1-494f-4643-bd3f-ca17d8e1cc06", 00:12:51.390 "assigned_rate_limits": { 00:12:51.390 "rw_ios_per_sec": 0, 00:12:51.390 "rw_mbytes_per_sec": 0, 00:12:51.390 "r_mbytes_per_sec": 0, 00:12:51.390 "w_mbytes_per_sec": 0 00:12:51.390 }, 00:12:51.390 "claimed": true, 00:12:51.390 "claim_type": "exclusive_write", 00:12:51.390 "zoned": false, 00:12:51.390 "supported_io_types": { 00:12:51.390 "read": true, 00:12:51.390 "write": true, 00:12:51.390 "unmap": true, 00:12:51.390 "flush": true, 00:12:51.390 "reset": true, 00:12:51.390 "nvme_admin": false, 00:12:51.390 "nvme_io": false, 00:12:51.390 "nvme_io_md": false, 00:12:51.390 "write_zeroes": true, 00:12:51.390 "zcopy": true, 00:12:51.390 "get_zone_info": false, 00:12:51.390 "zone_management": false, 00:12:51.390 "zone_append": false, 00:12:51.390 "compare": false, 00:12:51.390 "compare_and_write": false, 00:12:51.390 "abort": true, 00:12:51.390 "seek_hole": false, 00:12:51.390 "seek_data": false, 00:12:51.390 "copy": true, 00:12:51.390 "nvme_iov_md": false 00:12:51.390 }, 00:12:51.390 "memory_domains": [ 00:12:51.390 { 00:12:51.390 "dma_device_id": 
"system", 00:12:51.390 "dma_device_type": 1 00:12:51.390 }, 00:12:51.390 { 00:12:51.391 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.391 "dma_device_type": 2 00:12:51.391 } 00:12:51.391 ], 00:12:51.391 "driver_specific": {} 00:12:51.391 } 00:12:51.391 ] 00:12:51.391 06:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.391 06:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:51.391 06:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:51.391 06:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:51.391 06:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:51.391 06:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:51.391 06:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:51.391 06:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:51.391 06:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:51.391 06:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:51.391 06:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:51.391 06:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:51.391 06:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.391 06:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.391 06:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:12:51.391 06:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:51.391 06:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.391 06:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:51.391 "name": "Existed_Raid", 00:12:51.391 "uuid": "93d9d88e-4060-455b-a7f2-54fd0686344a", 00:12:51.391 "strip_size_kb": 0, 00:12:51.391 "state": "configuring", 00:12:51.391 "raid_level": "raid1", 00:12:51.391 "superblock": true, 00:12:51.391 "num_base_bdevs": 3, 00:12:51.391 "num_base_bdevs_discovered": 1, 00:12:51.391 "num_base_bdevs_operational": 3, 00:12:51.391 "base_bdevs_list": [ 00:12:51.391 { 00:12:51.391 "name": "BaseBdev1", 00:12:51.391 "uuid": "8a9ba7c1-494f-4643-bd3f-ca17d8e1cc06", 00:12:51.391 "is_configured": true, 00:12:51.391 "data_offset": 2048, 00:12:51.391 "data_size": 63488 00:12:51.391 }, 00:12:51.391 { 00:12:51.391 "name": "BaseBdev2", 00:12:51.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.391 "is_configured": false, 00:12:51.391 "data_offset": 0, 00:12:51.391 "data_size": 0 00:12:51.391 }, 00:12:51.391 { 00:12:51.391 "name": "BaseBdev3", 00:12:51.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.391 "is_configured": false, 00:12:51.391 "data_offset": 0, 00:12:51.391 "data_size": 0 00:12:51.391 } 00:12:51.391 ] 00:12:51.391 }' 00:12:51.391 06:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:51.391 06:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.961 06:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:51.961 06:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.961 06:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:12:51.961 [2024-11-26 06:22:35.860292] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:51.961 [2024-11-26 06:22:35.860375] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:51.961 06:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.961 06:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:51.961 06:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.961 06:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.961 [2024-11-26 06:22:35.872358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:51.961 [2024-11-26 06:22:35.874919] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:51.961 [2024-11-26 06:22:35.874973] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:51.961 [2024-11-26 06:22:35.874984] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:51.961 [2024-11-26 06:22:35.874993] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:51.961 06:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.961 06:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:51.961 06:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:51.961 06:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:51.961 06:22:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:51.961 06:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:51.961 06:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:51.961 06:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:51.961 06:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:51.961 06:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:51.961 06:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:51.961 06:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:51.961 06:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:51.961 06:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.961 06:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:51.961 06:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.961 06:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.961 06:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.961 06:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:51.961 "name": "Existed_Raid", 00:12:51.961 "uuid": "2e52d7c5-2e00-4cdc-aefb-fd51205d1240", 00:12:51.961 "strip_size_kb": 0, 00:12:51.961 "state": "configuring", 00:12:51.961 "raid_level": "raid1", 00:12:51.961 "superblock": true, 00:12:51.961 "num_base_bdevs": 3, 00:12:51.961 "num_base_bdevs_discovered": 1, 00:12:51.961 
"num_base_bdevs_operational": 3, 00:12:51.961 "base_bdevs_list": [ 00:12:51.961 { 00:12:51.961 "name": "BaseBdev1", 00:12:51.961 "uuid": "8a9ba7c1-494f-4643-bd3f-ca17d8e1cc06", 00:12:51.961 "is_configured": true, 00:12:51.961 "data_offset": 2048, 00:12:51.961 "data_size": 63488 00:12:51.961 }, 00:12:51.961 { 00:12:51.961 "name": "BaseBdev2", 00:12:51.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.961 "is_configured": false, 00:12:51.961 "data_offset": 0, 00:12:51.961 "data_size": 0 00:12:51.961 }, 00:12:51.961 { 00:12:51.961 "name": "BaseBdev3", 00:12:51.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.961 "is_configured": false, 00:12:51.961 "data_offset": 0, 00:12:51.961 "data_size": 0 00:12:51.961 } 00:12:51.961 ] 00:12:51.961 }' 00:12:51.961 06:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:51.961 06:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.221 06:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:52.221 06:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.221 06:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.481 [2024-11-26 06:22:36.400541] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:52.481 BaseBdev2 00:12:52.481 06:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.481 06:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:52.481 06:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:52.481 06:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:52.481 06:22:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:52.481 06:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:52.481 06:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:52.481 06:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:52.481 06:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.481 06:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.481 06:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.481 06:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:52.481 06:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.481 06:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.481 [ 00:12:52.481 { 00:12:52.481 "name": "BaseBdev2", 00:12:52.481 "aliases": [ 00:12:52.481 "76cf3414-77bc-44a5-b2b3-f03ee4b76618" 00:12:52.481 ], 00:12:52.481 "product_name": "Malloc disk", 00:12:52.481 "block_size": 512, 00:12:52.481 "num_blocks": 65536, 00:12:52.481 "uuid": "76cf3414-77bc-44a5-b2b3-f03ee4b76618", 00:12:52.481 "assigned_rate_limits": { 00:12:52.481 "rw_ios_per_sec": 0, 00:12:52.481 "rw_mbytes_per_sec": 0, 00:12:52.481 "r_mbytes_per_sec": 0, 00:12:52.481 "w_mbytes_per_sec": 0 00:12:52.481 }, 00:12:52.481 "claimed": true, 00:12:52.481 "claim_type": "exclusive_write", 00:12:52.481 "zoned": false, 00:12:52.481 "supported_io_types": { 00:12:52.481 "read": true, 00:12:52.481 "write": true, 00:12:52.481 "unmap": true, 00:12:52.481 "flush": true, 00:12:52.481 "reset": true, 00:12:52.481 "nvme_admin": false, 00:12:52.481 "nvme_io": false, 
00:12:52.481 "nvme_io_md": false, 00:12:52.481 "write_zeroes": true, 00:12:52.481 "zcopy": true, 00:12:52.481 "get_zone_info": false, 00:12:52.481 "zone_management": false, 00:12:52.481 "zone_append": false, 00:12:52.481 "compare": false, 00:12:52.481 "compare_and_write": false, 00:12:52.481 "abort": true, 00:12:52.481 "seek_hole": false, 00:12:52.481 "seek_data": false, 00:12:52.481 "copy": true, 00:12:52.481 "nvme_iov_md": false 00:12:52.481 }, 00:12:52.481 "memory_domains": [ 00:12:52.481 { 00:12:52.481 "dma_device_id": "system", 00:12:52.481 "dma_device_type": 1 00:12:52.481 }, 00:12:52.481 { 00:12:52.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.481 "dma_device_type": 2 00:12:52.481 } 00:12:52.481 ], 00:12:52.481 "driver_specific": {} 00:12:52.481 } 00:12:52.481 ] 00:12:52.481 06:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.481 06:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:52.481 06:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:52.481 06:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:52.481 06:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:52.481 06:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:52.481 06:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:52.481 06:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:52.481 06:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:52.481 06:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:52.481 06:22:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.481 06:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.481 06:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.481 06:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.481 06:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:52.481 06:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.481 06:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.481 06:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.481 06:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.481 06:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.481 "name": "Existed_Raid", 00:12:52.481 "uuid": "2e52d7c5-2e00-4cdc-aefb-fd51205d1240", 00:12:52.481 "strip_size_kb": 0, 00:12:52.481 "state": "configuring", 00:12:52.481 "raid_level": "raid1", 00:12:52.481 "superblock": true, 00:12:52.481 "num_base_bdevs": 3, 00:12:52.481 "num_base_bdevs_discovered": 2, 00:12:52.481 "num_base_bdevs_operational": 3, 00:12:52.481 "base_bdevs_list": [ 00:12:52.481 { 00:12:52.481 "name": "BaseBdev1", 00:12:52.481 "uuid": "8a9ba7c1-494f-4643-bd3f-ca17d8e1cc06", 00:12:52.481 "is_configured": true, 00:12:52.481 "data_offset": 2048, 00:12:52.481 "data_size": 63488 00:12:52.481 }, 00:12:52.481 { 00:12:52.481 "name": "BaseBdev2", 00:12:52.481 "uuid": "76cf3414-77bc-44a5-b2b3-f03ee4b76618", 00:12:52.481 "is_configured": true, 00:12:52.481 "data_offset": 2048, 00:12:52.482 "data_size": 63488 00:12:52.482 }, 00:12:52.482 { 00:12:52.482 
"name": "BaseBdev3", 00:12:52.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.482 "is_configured": false, 00:12:52.482 "data_offset": 0, 00:12:52.482 "data_size": 0 00:12:52.482 } 00:12:52.482 ] 00:12:52.482 }' 00:12:52.482 06:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.482 06:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.051 06:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:53.051 06:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.051 06:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.051 [2024-11-26 06:22:36.992920] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:53.051 [2024-11-26 06:22:36.993328] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:53.051 [2024-11-26 06:22:36.993360] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:53.051 [2024-11-26 06:22:36.993750] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:53.051 [2024-11-26 06:22:36.993954] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:53.051 [2024-11-26 06:22:36.993965] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:53.051 BaseBdev3 00:12:53.051 [2024-11-26 06:22:36.994192] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:53.051 06:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.051 06:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:53.051 06:22:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:53.051 06:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:53.051 06:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:53.051 06:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:53.051 06:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:53.051 06:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:53.051 06:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.051 06:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.051 06:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.051 06:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:53.051 06:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.051 06:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.051 [ 00:12:53.051 { 00:12:53.051 "name": "BaseBdev3", 00:12:53.051 "aliases": [ 00:12:53.051 "0412cacc-481e-4654-8db7-969abf5a854c" 00:12:53.051 ], 00:12:53.051 "product_name": "Malloc disk", 00:12:53.051 "block_size": 512, 00:12:53.051 "num_blocks": 65536, 00:12:53.051 "uuid": "0412cacc-481e-4654-8db7-969abf5a854c", 00:12:53.051 "assigned_rate_limits": { 00:12:53.051 "rw_ios_per_sec": 0, 00:12:53.051 "rw_mbytes_per_sec": 0, 00:12:53.051 "r_mbytes_per_sec": 0, 00:12:53.051 "w_mbytes_per_sec": 0 00:12:53.051 }, 00:12:53.051 "claimed": true, 00:12:53.051 "claim_type": "exclusive_write", 00:12:53.051 "zoned": false, 00:12:53.051 "supported_io_types": { 
00:12:53.051 "read": true, 00:12:53.051 "write": true, 00:12:53.052 "unmap": true, 00:12:53.052 "flush": true, 00:12:53.052 "reset": true, 00:12:53.052 "nvme_admin": false, 00:12:53.052 "nvme_io": false, 00:12:53.052 "nvme_io_md": false, 00:12:53.052 "write_zeroes": true, 00:12:53.052 "zcopy": true, 00:12:53.052 "get_zone_info": false, 00:12:53.052 "zone_management": false, 00:12:53.052 "zone_append": false, 00:12:53.052 "compare": false, 00:12:53.052 "compare_and_write": false, 00:12:53.052 "abort": true, 00:12:53.052 "seek_hole": false, 00:12:53.052 "seek_data": false, 00:12:53.052 "copy": true, 00:12:53.052 "nvme_iov_md": false 00:12:53.052 }, 00:12:53.052 "memory_domains": [ 00:12:53.052 { 00:12:53.052 "dma_device_id": "system", 00:12:53.052 "dma_device_type": 1 00:12:53.052 }, 00:12:53.052 { 00:12:53.052 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.052 "dma_device_type": 2 00:12:53.052 } 00:12:53.052 ], 00:12:53.052 "driver_specific": {} 00:12:53.052 } 00:12:53.052 ] 00:12:53.052 06:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.052 06:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:53.052 06:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:53.052 06:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:53.052 06:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:53.052 06:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:53.052 06:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:53.052 06:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:53.052 06:22:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:53.052 06:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:53.052 06:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:53.052 06:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:53.052 06:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:53.052 06:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:53.052 06:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.052 06:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:53.052 06:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.052 06:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.052 06:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.052 06:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:53.052 "name": "Existed_Raid", 00:12:53.052 "uuid": "2e52d7c5-2e00-4cdc-aefb-fd51205d1240", 00:12:53.052 "strip_size_kb": 0, 00:12:53.052 "state": "online", 00:12:53.052 "raid_level": "raid1", 00:12:53.052 "superblock": true, 00:12:53.052 "num_base_bdevs": 3, 00:12:53.052 "num_base_bdevs_discovered": 3, 00:12:53.052 "num_base_bdevs_operational": 3, 00:12:53.052 "base_bdevs_list": [ 00:12:53.052 { 00:12:53.052 "name": "BaseBdev1", 00:12:53.052 "uuid": "8a9ba7c1-494f-4643-bd3f-ca17d8e1cc06", 00:12:53.052 "is_configured": true, 00:12:53.052 "data_offset": 2048, 00:12:53.052 "data_size": 63488 00:12:53.052 }, 00:12:53.052 { 00:12:53.052 "name": "BaseBdev2", 00:12:53.052 "uuid": 
"76cf3414-77bc-44a5-b2b3-f03ee4b76618", 00:12:53.052 "is_configured": true, 00:12:53.052 "data_offset": 2048, 00:12:53.052 "data_size": 63488 00:12:53.052 }, 00:12:53.052 { 00:12:53.052 "name": "BaseBdev3", 00:12:53.052 "uuid": "0412cacc-481e-4654-8db7-969abf5a854c", 00:12:53.052 "is_configured": true, 00:12:53.052 "data_offset": 2048, 00:12:53.052 "data_size": 63488 00:12:53.052 } 00:12:53.052 ] 00:12:53.052 }' 00:12:53.052 06:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:53.052 06:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.620 06:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:53.620 06:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:53.620 06:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:53.620 06:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:53.620 06:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:53.620 06:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:53.620 06:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:53.620 06:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:53.620 06:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.620 06:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.620 [2024-11-26 06:22:37.524552] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:53.620 06:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:12:53.620 06:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:53.620 "name": "Existed_Raid", 00:12:53.620 "aliases": [ 00:12:53.620 "2e52d7c5-2e00-4cdc-aefb-fd51205d1240" 00:12:53.620 ], 00:12:53.620 "product_name": "Raid Volume", 00:12:53.620 "block_size": 512, 00:12:53.620 "num_blocks": 63488, 00:12:53.620 "uuid": "2e52d7c5-2e00-4cdc-aefb-fd51205d1240", 00:12:53.620 "assigned_rate_limits": { 00:12:53.620 "rw_ios_per_sec": 0, 00:12:53.620 "rw_mbytes_per_sec": 0, 00:12:53.620 "r_mbytes_per_sec": 0, 00:12:53.620 "w_mbytes_per_sec": 0 00:12:53.620 }, 00:12:53.620 "claimed": false, 00:12:53.620 "zoned": false, 00:12:53.620 "supported_io_types": { 00:12:53.620 "read": true, 00:12:53.620 "write": true, 00:12:53.620 "unmap": false, 00:12:53.620 "flush": false, 00:12:53.620 "reset": true, 00:12:53.620 "nvme_admin": false, 00:12:53.620 "nvme_io": false, 00:12:53.620 "nvme_io_md": false, 00:12:53.620 "write_zeroes": true, 00:12:53.620 "zcopy": false, 00:12:53.620 "get_zone_info": false, 00:12:53.620 "zone_management": false, 00:12:53.620 "zone_append": false, 00:12:53.620 "compare": false, 00:12:53.620 "compare_and_write": false, 00:12:53.620 "abort": false, 00:12:53.620 "seek_hole": false, 00:12:53.620 "seek_data": false, 00:12:53.620 "copy": false, 00:12:53.620 "nvme_iov_md": false 00:12:53.620 }, 00:12:53.620 "memory_domains": [ 00:12:53.620 { 00:12:53.620 "dma_device_id": "system", 00:12:53.620 "dma_device_type": 1 00:12:53.620 }, 00:12:53.620 { 00:12:53.620 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.620 "dma_device_type": 2 00:12:53.620 }, 00:12:53.620 { 00:12:53.620 "dma_device_id": "system", 00:12:53.620 "dma_device_type": 1 00:12:53.620 }, 00:12:53.620 { 00:12:53.620 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.620 "dma_device_type": 2 00:12:53.620 }, 00:12:53.620 { 00:12:53.620 "dma_device_id": "system", 00:12:53.620 "dma_device_type": 1 00:12:53.620 }, 00:12:53.620 { 00:12:53.620 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.620 "dma_device_type": 2 00:12:53.620 } 00:12:53.620 ], 00:12:53.620 "driver_specific": { 00:12:53.620 "raid": { 00:12:53.620 "uuid": "2e52d7c5-2e00-4cdc-aefb-fd51205d1240", 00:12:53.620 "strip_size_kb": 0, 00:12:53.620 "state": "online", 00:12:53.620 "raid_level": "raid1", 00:12:53.620 "superblock": true, 00:12:53.620 "num_base_bdevs": 3, 00:12:53.620 "num_base_bdevs_discovered": 3, 00:12:53.620 "num_base_bdevs_operational": 3, 00:12:53.620 "base_bdevs_list": [ 00:12:53.620 { 00:12:53.620 "name": "BaseBdev1", 00:12:53.620 "uuid": "8a9ba7c1-494f-4643-bd3f-ca17d8e1cc06", 00:12:53.620 "is_configured": true, 00:12:53.620 "data_offset": 2048, 00:12:53.620 "data_size": 63488 00:12:53.620 }, 00:12:53.620 { 00:12:53.620 "name": "BaseBdev2", 00:12:53.620 "uuid": "76cf3414-77bc-44a5-b2b3-f03ee4b76618", 00:12:53.620 "is_configured": true, 00:12:53.620 "data_offset": 2048, 00:12:53.620 "data_size": 63488 00:12:53.620 }, 00:12:53.620 { 00:12:53.620 "name": "BaseBdev3", 00:12:53.620 "uuid": "0412cacc-481e-4654-8db7-969abf5a854c", 00:12:53.620 "is_configured": true, 00:12:53.620 "data_offset": 2048, 00:12:53.620 "data_size": 63488 00:12:53.620 } 00:12:53.620 ] 00:12:53.620 } 00:12:53.620 } 00:12:53.620 }' 00:12:53.620 06:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:53.620 06:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:53.620 BaseBdev2 00:12:53.620 BaseBdev3' 00:12:53.620 06:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:53.620 06:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:53.620 06:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:12:53.620 06:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:53.620 06:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.620 06:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:53.620 06:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.620 06:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.620 06:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:53.620 06:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:53.620 06:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:53.620 06:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:53.620 06:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.620 06:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.620 06:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:53.620 06:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.620 06:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:53.620 06:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:53.620 06:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:53.879 06:22:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:53.879 06:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.879 06:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:53.879 06:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.879 06:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.879 06:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:53.879 06:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:53.880 06:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:53.880 06:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.880 06:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.880 [2024-11-26 06:22:37.803960] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:53.880 06:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.880 06:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:53.880 06:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:53.880 06:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:53.880 06:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:12:53.880 06:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:53.880 06:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state 
Existed_Raid online raid1 0 2 00:12:53.880 06:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:53.880 06:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:53.880 06:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:53.880 06:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:53.880 06:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:53.880 06:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:53.880 06:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:53.880 06:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:53.880 06:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:53.880 06:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.880 06:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:53.880 06:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.880 06:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.880 06:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.880 06:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:53.880 "name": "Existed_Raid", 00:12:53.880 "uuid": "2e52d7c5-2e00-4cdc-aefb-fd51205d1240", 00:12:53.880 "strip_size_kb": 0, 00:12:53.880 "state": "online", 00:12:53.880 "raid_level": "raid1", 00:12:53.880 "superblock": true, 
00:12:53.880 "num_base_bdevs": 3, 00:12:53.880 "num_base_bdevs_discovered": 2, 00:12:53.880 "num_base_bdevs_operational": 2, 00:12:53.880 "base_bdevs_list": [ 00:12:53.880 { 00:12:53.880 "name": null, 00:12:53.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:53.880 "is_configured": false, 00:12:53.880 "data_offset": 0, 00:12:53.880 "data_size": 63488 00:12:53.880 }, 00:12:53.880 { 00:12:53.880 "name": "BaseBdev2", 00:12:53.880 "uuid": "76cf3414-77bc-44a5-b2b3-f03ee4b76618", 00:12:53.880 "is_configured": true, 00:12:53.880 "data_offset": 2048, 00:12:53.880 "data_size": 63488 00:12:53.880 }, 00:12:53.880 { 00:12:53.880 "name": "BaseBdev3", 00:12:53.880 "uuid": "0412cacc-481e-4654-8db7-969abf5a854c", 00:12:53.880 "is_configured": true, 00:12:53.880 "data_offset": 2048, 00:12:53.880 "data_size": 63488 00:12:53.880 } 00:12:53.880 ] 00:12:53.880 }' 00:12:53.880 06:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:53.880 06:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.449 06:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:54.449 06:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:54.449 06:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.449 06:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:54.449 06:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.449 06:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.449 06:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.449 06:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:54.449 06:22:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:54.449 06:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:54.449 06:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.449 06:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.449 [2024-11-26 06:22:38.428296] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:54.449 06:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.449 06:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:54.449 06:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:54.449 06:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.449 06:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.449 06:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:54.449 06:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.449 06:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.708 06:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:54.708 06:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:54.708 06:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:54.708 06:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.708 06:22:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:54.708 [2024-11-26 06:22:38.600104] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:54.708 [2024-11-26 06:22:38.600274] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:54.708 [2024-11-26 06:22:38.719124] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:54.708 [2024-11-26 06:22:38.719214] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:54.708 [2024-11-26 06:22:38.719230] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:54.708 06:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.708 06:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:54.708 06:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:54.708 06:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.708 06:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:54.708 06:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.708 06:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.708 06:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.708 06:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:54.708 06:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:54.708 06:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:12:54.708 06:22:38 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:54.708 06:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:54.708 06:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:54.708 06:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.708 06:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.708 BaseBdev2 00:12:54.708 06:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.708 06:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:54.708 06:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:54.708 06:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:54.708 06:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:54.708 06:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:54.708 06:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:54.708 06:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:54.708 06:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.708 06:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.708 06:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.708 06:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:54.708 06:22:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.708 06:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.966 [ 00:12:54.966 { 00:12:54.966 "name": "BaseBdev2", 00:12:54.966 "aliases": [ 00:12:54.966 "9341893b-c1ff-422b-b8b8-43affe328707" 00:12:54.966 ], 00:12:54.966 "product_name": "Malloc disk", 00:12:54.966 "block_size": 512, 00:12:54.966 "num_blocks": 65536, 00:12:54.966 "uuid": "9341893b-c1ff-422b-b8b8-43affe328707", 00:12:54.966 "assigned_rate_limits": { 00:12:54.966 "rw_ios_per_sec": 0, 00:12:54.966 "rw_mbytes_per_sec": 0, 00:12:54.966 "r_mbytes_per_sec": 0, 00:12:54.966 "w_mbytes_per_sec": 0 00:12:54.966 }, 00:12:54.966 "claimed": false, 00:12:54.966 "zoned": false, 00:12:54.966 "supported_io_types": { 00:12:54.966 "read": true, 00:12:54.966 "write": true, 00:12:54.966 "unmap": true, 00:12:54.966 "flush": true, 00:12:54.966 "reset": true, 00:12:54.966 "nvme_admin": false, 00:12:54.966 "nvme_io": false, 00:12:54.966 "nvme_io_md": false, 00:12:54.966 "write_zeroes": true, 00:12:54.966 "zcopy": true, 00:12:54.966 "get_zone_info": false, 00:12:54.966 "zone_management": false, 00:12:54.966 "zone_append": false, 00:12:54.966 "compare": false, 00:12:54.966 "compare_and_write": false, 00:12:54.966 "abort": true, 00:12:54.966 "seek_hole": false, 00:12:54.966 "seek_data": false, 00:12:54.966 "copy": true, 00:12:54.966 "nvme_iov_md": false 00:12:54.966 }, 00:12:54.966 "memory_domains": [ 00:12:54.966 { 00:12:54.966 "dma_device_id": "system", 00:12:54.966 "dma_device_type": 1 00:12:54.966 }, 00:12:54.966 { 00:12:54.966 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:54.966 "dma_device_type": 2 00:12:54.966 } 00:12:54.966 ], 00:12:54.966 "driver_specific": {} 00:12:54.966 } 00:12:54.966 ] 00:12:54.966 06:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.966 06:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:12:54.966 06:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:54.966 06:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:54.966 06:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:54.966 06:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.966 06:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.966 BaseBdev3 00:12:54.966 06:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.966 06:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:54.966 06:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:54.966 06:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:54.966 06:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:54.966 06:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:54.966 06:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:54.966 06:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:54.966 06:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.966 06:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.966 06:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.966 06:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:54.966 
06:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.966 06:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.966 [ 00:12:54.966 { 00:12:54.966 "name": "BaseBdev3", 00:12:54.966 "aliases": [ 00:12:54.966 "28cab401-972d-4d80-a075-e8ce81d97510" 00:12:54.966 ], 00:12:54.966 "product_name": "Malloc disk", 00:12:54.966 "block_size": 512, 00:12:54.966 "num_blocks": 65536, 00:12:54.966 "uuid": "28cab401-972d-4d80-a075-e8ce81d97510", 00:12:54.966 "assigned_rate_limits": { 00:12:54.966 "rw_ios_per_sec": 0, 00:12:54.966 "rw_mbytes_per_sec": 0, 00:12:54.966 "r_mbytes_per_sec": 0, 00:12:54.966 "w_mbytes_per_sec": 0 00:12:54.966 }, 00:12:54.966 "claimed": false, 00:12:54.966 "zoned": false, 00:12:54.966 "supported_io_types": { 00:12:54.966 "read": true, 00:12:54.966 "write": true, 00:12:54.966 "unmap": true, 00:12:54.966 "flush": true, 00:12:54.966 "reset": true, 00:12:54.966 "nvme_admin": false, 00:12:54.966 "nvme_io": false, 00:12:54.966 "nvme_io_md": false, 00:12:54.966 "write_zeroes": true, 00:12:54.966 "zcopy": true, 00:12:54.966 "get_zone_info": false, 00:12:54.966 "zone_management": false, 00:12:54.966 "zone_append": false, 00:12:54.966 "compare": false, 00:12:54.966 "compare_and_write": false, 00:12:54.966 "abort": true, 00:12:54.966 "seek_hole": false, 00:12:54.966 "seek_data": false, 00:12:54.966 "copy": true, 00:12:54.966 "nvme_iov_md": false 00:12:54.966 }, 00:12:54.966 "memory_domains": [ 00:12:54.966 { 00:12:54.966 "dma_device_id": "system", 00:12:54.966 "dma_device_type": 1 00:12:54.966 }, 00:12:54.966 { 00:12:54.966 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:54.966 "dma_device_type": 2 00:12:54.966 } 00:12:54.966 ], 00:12:54.966 "driver_specific": {} 00:12:54.966 } 00:12:54.966 ] 00:12:54.966 06:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.966 06:22:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:12:54.966 06:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:54.966 06:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:54.966 06:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:54.966 06:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.966 06:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.966 [2024-11-26 06:22:38.957346] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:54.966 [2024-11-26 06:22:38.957448] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:54.966 [2024-11-26 06:22:38.957500] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:54.966 [2024-11-26 06:22:38.959968] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:54.966 06:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.966 06:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:54.967 06:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:54.967 06:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:54.967 06:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:54.967 06:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:54.967 06:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:12:54.967 06:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:54.967 06:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:54.967 06:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:54.967 06:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:54.967 06:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.967 06:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:54.967 06:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.967 06:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.967 06:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.967 06:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:54.967 "name": "Existed_Raid", 00:12:54.967 "uuid": "ef3eab8c-cf00-4c40-890d-0a4ef77f17de", 00:12:54.967 "strip_size_kb": 0, 00:12:54.967 "state": "configuring", 00:12:54.967 "raid_level": "raid1", 00:12:54.967 "superblock": true, 00:12:54.967 "num_base_bdevs": 3, 00:12:54.967 "num_base_bdevs_discovered": 2, 00:12:54.967 "num_base_bdevs_operational": 3, 00:12:54.967 "base_bdevs_list": [ 00:12:54.967 { 00:12:54.967 "name": "BaseBdev1", 00:12:54.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.967 "is_configured": false, 00:12:54.967 "data_offset": 0, 00:12:54.967 "data_size": 0 00:12:54.967 }, 00:12:54.967 { 00:12:54.967 "name": "BaseBdev2", 00:12:54.967 "uuid": "9341893b-c1ff-422b-b8b8-43affe328707", 00:12:54.967 "is_configured": true, 00:12:54.967 "data_offset": 2048, 00:12:54.967 "data_size": 63488 
00:12:54.967 }, 00:12:54.967 { 00:12:54.967 "name": "BaseBdev3", 00:12:54.967 "uuid": "28cab401-972d-4d80-a075-e8ce81d97510", 00:12:54.967 "is_configured": true, 00:12:54.967 "data_offset": 2048, 00:12:54.967 "data_size": 63488 00:12:54.967 } 00:12:54.967 ] 00:12:54.967 }' 00:12:54.967 06:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:54.967 06:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.535 06:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:55.535 06:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.535 06:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.535 [2024-11-26 06:22:39.464546] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:55.535 06:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.535 06:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:55.535 06:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:55.535 06:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:55.535 06:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:55.535 06:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:55.535 06:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:55.535 06:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:55.535 06:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:12:55.535 06:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:55.535 06:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:55.535 06:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.535 06:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:55.535 06:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.535 06:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.535 06:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.535 06:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:55.535 "name": "Existed_Raid", 00:12:55.535 "uuid": "ef3eab8c-cf00-4c40-890d-0a4ef77f17de", 00:12:55.535 "strip_size_kb": 0, 00:12:55.535 "state": "configuring", 00:12:55.535 "raid_level": "raid1", 00:12:55.535 "superblock": true, 00:12:55.535 "num_base_bdevs": 3, 00:12:55.535 "num_base_bdevs_discovered": 1, 00:12:55.535 "num_base_bdevs_operational": 3, 00:12:55.535 "base_bdevs_list": [ 00:12:55.535 { 00:12:55.535 "name": "BaseBdev1", 00:12:55.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.535 "is_configured": false, 00:12:55.535 "data_offset": 0, 00:12:55.535 "data_size": 0 00:12:55.535 }, 00:12:55.535 { 00:12:55.535 "name": null, 00:12:55.535 "uuid": "9341893b-c1ff-422b-b8b8-43affe328707", 00:12:55.535 "is_configured": false, 00:12:55.535 "data_offset": 0, 00:12:55.535 "data_size": 63488 00:12:55.535 }, 00:12:55.535 { 00:12:55.535 "name": "BaseBdev3", 00:12:55.535 "uuid": "28cab401-972d-4d80-a075-e8ce81d97510", 00:12:55.535 "is_configured": true, 00:12:55.535 "data_offset": 2048, 00:12:55.535 "data_size": 63488 00:12:55.535 
} 00:12:55.535 ] 00:12:55.535 }' 00:12:55.535 06:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:55.535 06:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.106 06:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.106 06:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:56.106 06:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.106 06:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.106 06:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.106 06:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:56.106 06:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:56.106 06:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.106 06:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.106 [2024-11-26 06:22:40.045632] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:56.106 BaseBdev1 00:12:56.106 06:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.106 06:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:56.106 06:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:56.106 06:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:56.106 06:22:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:12:56.106 06:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:56.106 06:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:56.106 06:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:56.106 06:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.106 06:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.106 06:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.106 06:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:56.106 06:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.106 06:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.106 [ 00:12:56.106 { 00:12:56.106 "name": "BaseBdev1", 00:12:56.106 "aliases": [ 00:12:56.106 "2af555bf-b584-4241-8d01-e0a0245c8daf" 00:12:56.106 ], 00:12:56.106 "product_name": "Malloc disk", 00:12:56.106 "block_size": 512, 00:12:56.106 "num_blocks": 65536, 00:12:56.106 "uuid": "2af555bf-b584-4241-8d01-e0a0245c8daf", 00:12:56.106 "assigned_rate_limits": { 00:12:56.106 "rw_ios_per_sec": 0, 00:12:56.106 "rw_mbytes_per_sec": 0, 00:12:56.106 "r_mbytes_per_sec": 0, 00:12:56.106 "w_mbytes_per_sec": 0 00:12:56.106 }, 00:12:56.106 "claimed": true, 00:12:56.106 "claim_type": "exclusive_write", 00:12:56.106 "zoned": false, 00:12:56.106 "supported_io_types": { 00:12:56.106 "read": true, 00:12:56.106 "write": true, 00:12:56.106 "unmap": true, 00:12:56.106 "flush": true, 00:12:56.106 "reset": true, 00:12:56.106 "nvme_admin": false, 00:12:56.106 "nvme_io": false, 00:12:56.106 "nvme_io_md": false, 
00:12:56.106 "write_zeroes": true, 00:12:56.106 "zcopy": true, 00:12:56.106 "get_zone_info": false, 00:12:56.106 "zone_management": false, 00:12:56.106 "zone_append": false, 00:12:56.106 "compare": false, 00:12:56.106 "compare_and_write": false, 00:12:56.106 "abort": true, 00:12:56.106 "seek_hole": false, 00:12:56.106 "seek_data": false, 00:12:56.106 "copy": true, 00:12:56.106 "nvme_iov_md": false 00:12:56.106 }, 00:12:56.106 "memory_domains": [ 00:12:56.106 { 00:12:56.106 "dma_device_id": "system", 00:12:56.106 "dma_device_type": 1 00:12:56.106 }, 00:12:56.106 { 00:12:56.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:56.106 "dma_device_type": 2 00:12:56.106 } 00:12:56.106 ], 00:12:56.106 "driver_specific": {} 00:12:56.106 } 00:12:56.106 ] 00:12:56.106 06:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.106 06:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:56.106 06:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:56.106 06:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:56.106 06:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:56.106 06:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:56.106 06:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:56.106 06:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:56.106 06:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.106 06:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.106 06:22:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.106 06:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.106 06:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.106 06:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:56.106 06:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.106 06:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.106 06:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.106 06:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.106 "name": "Existed_Raid", 00:12:56.106 "uuid": "ef3eab8c-cf00-4c40-890d-0a4ef77f17de", 00:12:56.106 "strip_size_kb": 0, 00:12:56.106 "state": "configuring", 00:12:56.106 "raid_level": "raid1", 00:12:56.106 "superblock": true, 00:12:56.106 "num_base_bdevs": 3, 00:12:56.106 "num_base_bdevs_discovered": 2, 00:12:56.106 "num_base_bdevs_operational": 3, 00:12:56.106 "base_bdevs_list": [ 00:12:56.106 { 00:12:56.106 "name": "BaseBdev1", 00:12:56.106 "uuid": "2af555bf-b584-4241-8d01-e0a0245c8daf", 00:12:56.106 "is_configured": true, 00:12:56.106 "data_offset": 2048, 00:12:56.106 "data_size": 63488 00:12:56.106 }, 00:12:56.106 { 00:12:56.106 "name": null, 00:12:56.106 "uuid": "9341893b-c1ff-422b-b8b8-43affe328707", 00:12:56.106 "is_configured": false, 00:12:56.106 "data_offset": 0, 00:12:56.106 "data_size": 63488 00:12:56.107 }, 00:12:56.107 { 00:12:56.107 "name": "BaseBdev3", 00:12:56.107 "uuid": "28cab401-972d-4d80-a075-e8ce81d97510", 00:12:56.107 "is_configured": true, 00:12:56.107 "data_offset": 2048, 00:12:56.107 "data_size": 63488 00:12:56.107 } 00:12:56.107 ] 00:12:56.107 }' 00:12:56.107 06:22:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.107 06:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.675 06:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.675 06:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.675 06:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.675 06:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:56.675 06:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.675 06:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:56.675 06:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:56.675 06:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.675 06:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.675 [2024-11-26 06:22:40.580818] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:56.675 06:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.675 06:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:56.675 06:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:56.675 06:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:56.675 06:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:56.676 06:22:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:56.676 06:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:56.676 06:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.676 06:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.676 06:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.676 06:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.676 06:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.676 06:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.676 06:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.676 06:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:56.676 06:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.676 06:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.676 "name": "Existed_Raid", 00:12:56.676 "uuid": "ef3eab8c-cf00-4c40-890d-0a4ef77f17de", 00:12:56.676 "strip_size_kb": 0, 00:12:56.676 "state": "configuring", 00:12:56.676 "raid_level": "raid1", 00:12:56.676 "superblock": true, 00:12:56.676 "num_base_bdevs": 3, 00:12:56.676 "num_base_bdevs_discovered": 1, 00:12:56.676 "num_base_bdevs_operational": 3, 00:12:56.676 "base_bdevs_list": [ 00:12:56.676 { 00:12:56.676 "name": "BaseBdev1", 00:12:56.676 "uuid": "2af555bf-b584-4241-8d01-e0a0245c8daf", 00:12:56.676 "is_configured": true, 00:12:56.676 "data_offset": 2048, 00:12:56.676 "data_size": 63488 00:12:56.676 }, 00:12:56.676 { 
00:12:56.676 "name": null, 00:12:56.676 "uuid": "9341893b-c1ff-422b-b8b8-43affe328707", 00:12:56.676 "is_configured": false, 00:12:56.676 "data_offset": 0, 00:12:56.676 "data_size": 63488 00:12:56.676 }, 00:12:56.676 { 00:12:56.676 "name": null, 00:12:56.676 "uuid": "28cab401-972d-4d80-a075-e8ce81d97510", 00:12:56.676 "is_configured": false, 00:12:56.676 "data_offset": 0, 00:12:56.676 "data_size": 63488 00:12:56.676 } 00:12:56.676 ] 00:12:56.676 }' 00:12:56.676 06:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.676 06:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.935 06:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.935 06:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.935 06:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.935 06:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:56.935 06:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.935 06:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:56.935 06:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:56.935 06:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.935 06:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.935 [2024-11-26 06:22:41.012180] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:56.935 06:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.935 06:22:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:56.935 06:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:56.935 06:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:56.935 06:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:56.935 06:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:56.935 06:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:56.935 06:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.935 06:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.935 06:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.935 06:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.935 06:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:56.935 06:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.935 06:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.935 06:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.935 06:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.935 06:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.935 "name": "Existed_Raid", 00:12:56.935 "uuid": "ef3eab8c-cf00-4c40-890d-0a4ef77f17de", 00:12:56.935 "strip_size_kb": 0, 
00:12:56.935 "state": "configuring", 00:12:56.935 "raid_level": "raid1", 00:12:56.935 "superblock": true, 00:12:56.935 "num_base_bdevs": 3, 00:12:56.935 "num_base_bdevs_discovered": 2, 00:12:56.935 "num_base_bdevs_operational": 3, 00:12:56.935 "base_bdevs_list": [ 00:12:56.935 { 00:12:56.935 "name": "BaseBdev1", 00:12:56.935 "uuid": "2af555bf-b584-4241-8d01-e0a0245c8daf", 00:12:56.935 "is_configured": true, 00:12:56.935 "data_offset": 2048, 00:12:56.935 "data_size": 63488 00:12:56.935 }, 00:12:56.935 { 00:12:56.935 "name": null, 00:12:56.935 "uuid": "9341893b-c1ff-422b-b8b8-43affe328707", 00:12:56.935 "is_configured": false, 00:12:56.935 "data_offset": 0, 00:12:56.935 "data_size": 63488 00:12:56.935 }, 00:12:56.935 { 00:12:56.935 "name": "BaseBdev3", 00:12:56.935 "uuid": "28cab401-972d-4d80-a075-e8ce81d97510", 00:12:56.935 "is_configured": true, 00:12:56.935 "data_offset": 2048, 00:12:56.935 "data_size": 63488 00:12:56.935 } 00:12:56.935 ] 00:12:56.935 }' 00:12:56.935 06:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.935 06:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.537 06:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.537 06:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:57.537 06:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.537 06:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.537 06:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.537 06:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:57.537 06:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete 
BaseBdev1 00:12:57.537 06:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.537 06:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.537 [2024-11-26 06:22:41.507365] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:57.537 06:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.537 06:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:57.537 06:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:57.537 06:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:57.537 06:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:57.537 06:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:57.537 06:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:57.537 06:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:57.537 06:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:57.537 06:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:57.537 06:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:57.537 06:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.537 06:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:57.537 06:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:57.537 06:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.537 06:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.795 06:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:57.795 "name": "Existed_Raid", 00:12:57.795 "uuid": "ef3eab8c-cf00-4c40-890d-0a4ef77f17de", 00:12:57.795 "strip_size_kb": 0, 00:12:57.795 "state": "configuring", 00:12:57.795 "raid_level": "raid1", 00:12:57.795 "superblock": true, 00:12:57.795 "num_base_bdevs": 3, 00:12:57.795 "num_base_bdevs_discovered": 1, 00:12:57.795 "num_base_bdevs_operational": 3, 00:12:57.795 "base_bdevs_list": [ 00:12:57.795 { 00:12:57.795 "name": null, 00:12:57.795 "uuid": "2af555bf-b584-4241-8d01-e0a0245c8daf", 00:12:57.795 "is_configured": false, 00:12:57.795 "data_offset": 0, 00:12:57.795 "data_size": 63488 00:12:57.795 }, 00:12:57.795 { 00:12:57.795 "name": null, 00:12:57.795 "uuid": "9341893b-c1ff-422b-b8b8-43affe328707", 00:12:57.795 "is_configured": false, 00:12:57.795 "data_offset": 0, 00:12:57.795 "data_size": 63488 00:12:57.795 }, 00:12:57.795 { 00:12:57.795 "name": "BaseBdev3", 00:12:57.795 "uuid": "28cab401-972d-4d80-a075-e8ce81d97510", 00:12:57.795 "is_configured": true, 00:12:57.795 "data_offset": 2048, 00:12:57.795 "data_size": 63488 00:12:57.795 } 00:12:57.795 ] 00:12:57.795 }' 00:12:57.795 06:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:57.795 06:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.052 06:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.052 06:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:58.052 06:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:58.052 06:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.052 06:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.052 06:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:58.052 06:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:58.052 06:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.052 06:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.052 [2024-11-26 06:22:42.112814] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:58.052 06:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.052 06:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:58.052 06:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:58.052 06:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:58.052 06:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:58.052 06:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:58.052 06:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:58.052 06:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.052 06:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:58.052 06:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:12:58.052 06:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.052 06:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:58.052 06:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.052 06:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.052 06:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.052 06:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.052 06:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:58.052 "name": "Existed_Raid", 00:12:58.052 "uuid": "ef3eab8c-cf00-4c40-890d-0a4ef77f17de", 00:12:58.052 "strip_size_kb": 0, 00:12:58.052 "state": "configuring", 00:12:58.052 "raid_level": "raid1", 00:12:58.052 "superblock": true, 00:12:58.052 "num_base_bdevs": 3, 00:12:58.052 "num_base_bdevs_discovered": 2, 00:12:58.053 "num_base_bdevs_operational": 3, 00:12:58.053 "base_bdevs_list": [ 00:12:58.053 { 00:12:58.053 "name": null, 00:12:58.053 "uuid": "2af555bf-b584-4241-8d01-e0a0245c8daf", 00:12:58.053 "is_configured": false, 00:12:58.053 "data_offset": 0, 00:12:58.053 "data_size": 63488 00:12:58.053 }, 00:12:58.053 { 00:12:58.053 "name": "BaseBdev2", 00:12:58.053 "uuid": "9341893b-c1ff-422b-b8b8-43affe328707", 00:12:58.053 "is_configured": true, 00:12:58.053 "data_offset": 2048, 00:12:58.053 "data_size": 63488 00:12:58.053 }, 00:12:58.053 { 00:12:58.053 "name": "BaseBdev3", 00:12:58.053 "uuid": "28cab401-972d-4d80-a075-e8ce81d97510", 00:12:58.053 "is_configured": true, 00:12:58.053 "data_offset": 2048, 00:12:58.053 "data_size": 63488 00:12:58.053 } 00:12:58.053 ] 00:12:58.053 }' 00:12:58.053 06:22:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:58.053 06:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.619 06:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.619 06:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.619 06:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:58.619 06:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.619 06:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.619 06:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:58.619 06:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.619 06:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.619 06:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.619 06:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:58.619 06:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.619 06:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 2af555bf-b584-4241-8d01-e0a0245c8daf 00:12:58.619 06:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.619 06:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.619 [2024-11-26 06:22:42.685231] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:58.619 [2024-11-26 06:22:42.685546] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:58.619 [2024-11-26 06:22:42.685561] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:58.619 [2024-11-26 06:22:42.685912] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:58.619 NewBaseBdev 00:12:58.619 [2024-11-26 06:22:42.686134] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:58.619 [2024-11-26 06:22:42.686151] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:58.619 [2024-11-26 06:22:42.686350] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:58.619 06:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.619 06:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:58.619 06:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:58.619 06:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:58.619 06:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:58.619 06:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:58.619 06:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:58.619 06:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:58.619 06:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.619 06:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.619 06:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:12:58.619 06:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:58.619 06:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.619 06:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.619 [ 00:12:58.619 { 00:12:58.619 "name": "NewBaseBdev", 00:12:58.619 "aliases": [ 00:12:58.619 "2af555bf-b584-4241-8d01-e0a0245c8daf" 00:12:58.619 ], 00:12:58.619 "product_name": "Malloc disk", 00:12:58.619 "block_size": 512, 00:12:58.619 "num_blocks": 65536, 00:12:58.619 "uuid": "2af555bf-b584-4241-8d01-e0a0245c8daf", 00:12:58.619 "assigned_rate_limits": { 00:12:58.619 "rw_ios_per_sec": 0, 00:12:58.619 "rw_mbytes_per_sec": 0, 00:12:58.619 "r_mbytes_per_sec": 0, 00:12:58.619 "w_mbytes_per_sec": 0 00:12:58.619 }, 00:12:58.619 "claimed": true, 00:12:58.619 "claim_type": "exclusive_write", 00:12:58.619 "zoned": false, 00:12:58.619 "supported_io_types": { 00:12:58.619 "read": true, 00:12:58.619 "write": true, 00:12:58.619 "unmap": true, 00:12:58.619 "flush": true, 00:12:58.619 "reset": true, 00:12:58.619 "nvme_admin": false, 00:12:58.619 "nvme_io": false, 00:12:58.619 "nvme_io_md": false, 00:12:58.619 "write_zeroes": true, 00:12:58.619 "zcopy": true, 00:12:58.619 "get_zone_info": false, 00:12:58.619 "zone_management": false, 00:12:58.619 "zone_append": false, 00:12:58.619 "compare": false, 00:12:58.619 "compare_and_write": false, 00:12:58.619 "abort": true, 00:12:58.619 "seek_hole": false, 00:12:58.619 "seek_data": false, 00:12:58.619 "copy": true, 00:12:58.619 "nvme_iov_md": false 00:12:58.619 }, 00:12:58.619 "memory_domains": [ 00:12:58.619 { 00:12:58.619 "dma_device_id": "system", 00:12:58.619 "dma_device_type": 1 00:12:58.619 }, 00:12:58.619 { 00:12:58.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:58.619 "dma_device_type": 2 00:12:58.619 } 00:12:58.619 ], 00:12:58.620 
"driver_specific": {} 00:12:58.620 } 00:12:58.620 ] 00:12:58.620 06:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.620 06:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:58.620 06:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:58.620 06:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:58.620 06:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:58.620 06:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:58.620 06:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:58.620 06:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:58.620 06:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.620 06:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:58.620 06:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:58.620 06:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.620 06:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.620 06:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:58.620 06:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.620 06:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.620 06:22:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.878 06:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:58.878 "name": "Existed_Raid", 00:12:58.878 "uuid": "ef3eab8c-cf00-4c40-890d-0a4ef77f17de", 00:12:58.878 "strip_size_kb": 0, 00:12:58.878 "state": "online", 00:12:58.878 "raid_level": "raid1", 00:12:58.878 "superblock": true, 00:12:58.878 "num_base_bdevs": 3, 00:12:58.878 "num_base_bdevs_discovered": 3, 00:12:58.878 "num_base_bdevs_operational": 3, 00:12:58.878 "base_bdevs_list": [ 00:12:58.878 { 00:12:58.878 "name": "NewBaseBdev", 00:12:58.878 "uuid": "2af555bf-b584-4241-8d01-e0a0245c8daf", 00:12:58.878 "is_configured": true, 00:12:58.878 "data_offset": 2048, 00:12:58.878 "data_size": 63488 00:12:58.878 }, 00:12:58.878 { 00:12:58.878 "name": "BaseBdev2", 00:12:58.878 "uuid": "9341893b-c1ff-422b-b8b8-43affe328707", 00:12:58.878 "is_configured": true, 00:12:58.878 "data_offset": 2048, 00:12:58.878 "data_size": 63488 00:12:58.878 }, 00:12:58.878 { 00:12:58.878 "name": "BaseBdev3", 00:12:58.878 "uuid": "28cab401-972d-4d80-a075-e8ce81d97510", 00:12:58.878 "is_configured": true, 00:12:58.878 "data_offset": 2048, 00:12:58.878 "data_size": 63488 00:12:58.878 } 00:12:58.878 ] 00:12:58.878 }' 00:12:58.878 06:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:58.878 06:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.138 06:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:59.138 06:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:59.138 06:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:59.138 06:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:59.138 06:22:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:59.138 06:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:59.138 06:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:59.138 06:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:59.138 06:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.138 06:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.138 [2024-11-26 06:22:43.156875] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:59.138 06:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.138 06:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:59.138 "name": "Existed_Raid", 00:12:59.138 "aliases": [ 00:12:59.138 "ef3eab8c-cf00-4c40-890d-0a4ef77f17de" 00:12:59.138 ], 00:12:59.138 "product_name": "Raid Volume", 00:12:59.138 "block_size": 512, 00:12:59.138 "num_blocks": 63488, 00:12:59.138 "uuid": "ef3eab8c-cf00-4c40-890d-0a4ef77f17de", 00:12:59.138 "assigned_rate_limits": { 00:12:59.138 "rw_ios_per_sec": 0, 00:12:59.138 "rw_mbytes_per_sec": 0, 00:12:59.138 "r_mbytes_per_sec": 0, 00:12:59.138 "w_mbytes_per_sec": 0 00:12:59.138 }, 00:12:59.138 "claimed": false, 00:12:59.138 "zoned": false, 00:12:59.138 "supported_io_types": { 00:12:59.138 "read": true, 00:12:59.138 "write": true, 00:12:59.138 "unmap": false, 00:12:59.138 "flush": false, 00:12:59.138 "reset": true, 00:12:59.138 "nvme_admin": false, 00:12:59.138 "nvme_io": false, 00:12:59.138 "nvme_io_md": false, 00:12:59.138 "write_zeroes": true, 00:12:59.138 "zcopy": false, 00:12:59.138 "get_zone_info": false, 00:12:59.138 "zone_management": false, 00:12:59.138 "zone_append": false, 
00:12:59.138 "compare": false, 00:12:59.138 "compare_and_write": false, 00:12:59.138 "abort": false, 00:12:59.138 "seek_hole": false, 00:12:59.138 "seek_data": false, 00:12:59.138 "copy": false, 00:12:59.138 "nvme_iov_md": false 00:12:59.138 }, 00:12:59.138 "memory_domains": [ 00:12:59.138 { 00:12:59.138 "dma_device_id": "system", 00:12:59.138 "dma_device_type": 1 00:12:59.138 }, 00:12:59.138 { 00:12:59.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.138 "dma_device_type": 2 00:12:59.138 }, 00:12:59.138 { 00:12:59.138 "dma_device_id": "system", 00:12:59.138 "dma_device_type": 1 00:12:59.138 }, 00:12:59.138 { 00:12:59.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.138 "dma_device_type": 2 00:12:59.138 }, 00:12:59.138 { 00:12:59.138 "dma_device_id": "system", 00:12:59.138 "dma_device_type": 1 00:12:59.138 }, 00:12:59.138 { 00:12:59.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.138 "dma_device_type": 2 00:12:59.138 } 00:12:59.138 ], 00:12:59.138 "driver_specific": { 00:12:59.138 "raid": { 00:12:59.138 "uuid": "ef3eab8c-cf00-4c40-890d-0a4ef77f17de", 00:12:59.138 "strip_size_kb": 0, 00:12:59.138 "state": "online", 00:12:59.138 "raid_level": "raid1", 00:12:59.138 "superblock": true, 00:12:59.138 "num_base_bdevs": 3, 00:12:59.138 "num_base_bdevs_discovered": 3, 00:12:59.138 "num_base_bdevs_operational": 3, 00:12:59.138 "base_bdevs_list": [ 00:12:59.138 { 00:12:59.138 "name": "NewBaseBdev", 00:12:59.138 "uuid": "2af555bf-b584-4241-8d01-e0a0245c8daf", 00:12:59.138 "is_configured": true, 00:12:59.138 "data_offset": 2048, 00:12:59.138 "data_size": 63488 00:12:59.138 }, 00:12:59.138 { 00:12:59.138 "name": "BaseBdev2", 00:12:59.138 "uuid": "9341893b-c1ff-422b-b8b8-43affe328707", 00:12:59.138 "is_configured": true, 00:12:59.138 "data_offset": 2048, 00:12:59.138 "data_size": 63488 00:12:59.138 }, 00:12:59.138 { 00:12:59.138 "name": "BaseBdev3", 00:12:59.138 "uuid": "28cab401-972d-4d80-a075-e8ce81d97510", 00:12:59.138 "is_configured": true, 00:12:59.138 
"data_offset": 2048, 00:12:59.138 "data_size": 63488 00:12:59.138 } 00:12:59.138 ] 00:12:59.138 } 00:12:59.138 } 00:12:59.138 }' 00:12:59.138 06:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:59.138 06:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:59.138 BaseBdev2 00:12:59.138 BaseBdev3' 00:12:59.138 06:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:59.138 06:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:59.138 06:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:59.138 06:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:59.138 06:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:59.139 06:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.139 06:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.398 06:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.399 06:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:59.399 06:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:59.399 06:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:59.399 06:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:59.399 06:22:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:59.399 06:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.399 06:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.399 06:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.399 06:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:59.399 06:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:59.399 06:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:59.399 06:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:59.399 06:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:59.399 06:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.399 06:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.399 06:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.399 06:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:59.399 06:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:59.399 06:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:59.399 06:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.399 06:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:12:59.399 [2024-11-26 06:22:43.412102] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:59.399 [2024-11-26 06:22:43.412143] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:59.399 [2024-11-26 06:22:43.412245] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:59.399 [2024-11-26 06:22:43.412575] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:59.399 [2024-11-26 06:22:43.412587] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:59.399 06:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.399 06:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68456 00:12:59.399 06:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 68456 ']' 00:12:59.399 06:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 68456 00:12:59.399 06:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:59.399 06:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:59.399 06:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68456 00:12:59.399 06:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:59.399 06:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:59.399 06:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68456' 00:12:59.399 killing process with pid 68456 00:12:59.399 06:22:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@973 -- # kill 68456 00:12:59.399 [2024-11-26 06:22:43.455456] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:59.399 06:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 68456 00:12:59.968 [2024-11-26 06:22:43.795031] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:01.349 06:22:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:01.349 00:13:01.349 real 0m11.218s 00:13:01.349 user 0m17.378s 00:13:01.349 sys 0m2.201s 00:13:01.349 06:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:01.349 06:22:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.349 ************************************ 00:13:01.349 END TEST raid_state_function_test_sb 00:13:01.349 ************************************ 00:13:01.349 06:22:45 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:13:01.349 06:22:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:01.349 06:22:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:01.349 06:22:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:01.349 ************************************ 00:13:01.349 START TEST raid_superblock_test 00:13:01.349 ************************************ 00:13:01.349 06:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:13:01.349 06:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:13:01.349 06:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:13:01.349 06:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:01.349 06:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:01.349 
06:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:01.349 06:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:01.349 06:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:01.349 06:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:01.349 06:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:01.349 06:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:01.349 06:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:13:01.349 06:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:01.349 06:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:01.349 06:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:13:01.349 06:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:13:01.349 06:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=69081 00:13:01.349 06:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:01.349 06:22:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 69081 00:13:01.349 06:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 69081 ']' 00:13:01.349 06:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:01.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:01.349 06:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:01.349 06:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:01.350 06:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:01.350 06:22:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.350 [2024-11-26 06:22:45.276920] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:13:01.350 [2024-11-26 06:22:45.277117] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69081 ] 00:13:01.350 [2024-11-26 06:22:45.442350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:01.610 [2024-11-26 06:22:45.593455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.874 [2024-11-26 06:22:45.849229] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:01.874 [2024-11-26 06:22:45.849316] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:02.135 06:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:02.135 06:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:13:02.135 06:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:02.135 06:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:02.135 06:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:02.135 06:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 
00:13:02.135 06:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:02.135 06:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:02.135 06:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:02.135 06:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:02.135 06:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:13:02.135 06:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.135 06:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.135 malloc1 00:13:02.135 06:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.135 06:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:02.135 06:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.135 06:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.135 [2024-11-26 06:22:46.260580] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:02.135 [2024-11-26 06:22:46.260706] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:02.135 [2024-11-26 06:22:46.260803] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:02.135 [2024-11-26 06:22:46.260844] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:02.135 [2024-11-26 06:22:46.263639] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:02.135 [2024-11-26 06:22:46.263738] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:02.135 pt1 00:13:02.135 06:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.135 06:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:02.135 06:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:02.395 06:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:02.395 06:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:13:02.395 06:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:02.395 06:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:02.395 06:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:02.395 06:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:02.395 06:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:13:02.395 06:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.395 06:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.395 malloc2 00:13:02.395 06:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.395 06:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:02.395 06:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.395 06:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.395 [2024-11-26 06:22:46.331800] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on malloc2 00:13:02.395 [2024-11-26 06:22:46.331939] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:02.395 [2024-11-26 06:22:46.331972] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:02.395 [2024-11-26 06:22:46.331983] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:02.395 [2024-11-26 06:22:46.334705] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:02.395 [2024-11-26 06:22:46.334748] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:02.395 pt2 00:13:02.395 06:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.395 06:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:02.395 06:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:02.395 06:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:13:02.395 06:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:13:02.395 06:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:02.395 06:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:02.395 06:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:02.395 06:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:02.395 06:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:13:02.395 06:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.395 06:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.395 
malloc3 00:13:02.395 06:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.395 06:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:02.395 06:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.395 06:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.395 [2024-11-26 06:22:46.415588] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:02.395 [2024-11-26 06:22:46.415729] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:02.395 [2024-11-26 06:22:46.415780] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:02.395 [2024-11-26 06:22:46.415824] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:02.395 [2024-11-26 06:22:46.418774] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:02.396 [2024-11-26 06:22:46.418857] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:02.396 pt3 00:13:02.396 06:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.396 06:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:02.396 06:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:02.396 06:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:13:02.396 06:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.396 06:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.396 [2024-11-26 06:22:46.427777] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
pt1 is claimed 00:13:02.396 [2024-11-26 06:22:46.430223] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:02.396 [2024-11-26 06:22:46.430344] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:02.396 [2024-11-26 06:22:46.430582] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:02.396 [2024-11-26 06:22:46.430642] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:02.396 [2024-11-26 06:22:46.431001] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:02.396 [2024-11-26 06:22:46.431291] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:02.396 [2024-11-26 06:22:46.431346] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:02.396 [2024-11-26 06:22:46.431651] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:02.396 06:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.396 06:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:02.396 06:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:02.396 06:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:02.396 06:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:02.396 06:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:02.396 06:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:02.396 06:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:02.396 06:22:46 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:02.396 06:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:02.396 06:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:02.396 06:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.396 06:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.396 06:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.396 06:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.396 06:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.396 06:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:02.396 "name": "raid_bdev1", 00:13:02.396 "uuid": "22688430-e2c9-41a2-b471-f362646e0a14", 00:13:02.396 "strip_size_kb": 0, 00:13:02.396 "state": "online", 00:13:02.396 "raid_level": "raid1", 00:13:02.396 "superblock": true, 00:13:02.396 "num_base_bdevs": 3, 00:13:02.396 "num_base_bdevs_discovered": 3, 00:13:02.396 "num_base_bdevs_operational": 3, 00:13:02.396 "base_bdevs_list": [ 00:13:02.396 { 00:13:02.396 "name": "pt1", 00:13:02.396 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:02.396 "is_configured": true, 00:13:02.396 "data_offset": 2048, 00:13:02.396 "data_size": 63488 00:13:02.396 }, 00:13:02.396 { 00:13:02.396 "name": "pt2", 00:13:02.396 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:02.396 "is_configured": true, 00:13:02.396 "data_offset": 2048, 00:13:02.396 "data_size": 63488 00:13:02.396 }, 00:13:02.396 { 00:13:02.396 "name": "pt3", 00:13:02.396 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:02.396 "is_configured": true, 00:13:02.396 "data_offset": 2048, 00:13:02.396 "data_size": 63488 00:13:02.396 } 00:13:02.396 ] 00:13:02.396 }' 
00:13:02.396 06:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:02.396 06:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.976 06:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:02.976 06:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:02.976 06:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:02.976 06:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:02.976 06:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:02.976 06:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:02.976 06:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:02.976 06:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:02.976 06:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.976 06:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.976 [2024-11-26 06:22:46.923362] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:02.976 06:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.976 06:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:02.976 "name": "raid_bdev1", 00:13:02.976 "aliases": [ 00:13:02.976 "22688430-e2c9-41a2-b471-f362646e0a14" 00:13:02.976 ], 00:13:02.976 "product_name": "Raid Volume", 00:13:02.976 "block_size": 512, 00:13:02.976 "num_blocks": 63488, 00:13:02.976 "uuid": "22688430-e2c9-41a2-b471-f362646e0a14", 00:13:02.976 "assigned_rate_limits": { 00:13:02.976 "rw_ios_per_sec": 0, 00:13:02.976 "rw_mbytes_per_sec": 
0, 00:13:02.976 "r_mbytes_per_sec": 0, 00:13:02.976 "w_mbytes_per_sec": 0 00:13:02.976 }, 00:13:02.976 "claimed": false, 00:13:02.976 "zoned": false, 00:13:02.976 "supported_io_types": { 00:13:02.976 "read": true, 00:13:02.976 "write": true, 00:13:02.976 "unmap": false, 00:13:02.976 "flush": false, 00:13:02.976 "reset": true, 00:13:02.976 "nvme_admin": false, 00:13:02.976 "nvme_io": false, 00:13:02.976 "nvme_io_md": false, 00:13:02.976 "write_zeroes": true, 00:13:02.976 "zcopy": false, 00:13:02.976 "get_zone_info": false, 00:13:02.976 "zone_management": false, 00:13:02.976 "zone_append": false, 00:13:02.976 "compare": false, 00:13:02.976 "compare_and_write": false, 00:13:02.976 "abort": false, 00:13:02.976 "seek_hole": false, 00:13:02.976 "seek_data": false, 00:13:02.976 "copy": false, 00:13:02.976 "nvme_iov_md": false 00:13:02.976 }, 00:13:02.976 "memory_domains": [ 00:13:02.976 { 00:13:02.976 "dma_device_id": "system", 00:13:02.976 "dma_device_type": 1 00:13:02.976 }, 00:13:02.976 { 00:13:02.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:02.976 "dma_device_type": 2 00:13:02.976 }, 00:13:02.976 { 00:13:02.976 "dma_device_id": "system", 00:13:02.976 "dma_device_type": 1 00:13:02.976 }, 00:13:02.976 { 00:13:02.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:02.976 "dma_device_type": 2 00:13:02.976 }, 00:13:02.976 { 00:13:02.976 "dma_device_id": "system", 00:13:02.976 "dma_device_type": 1 00:13:02.976 }, 00:13:02.976 { 00:13:02.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:02.976 "dma_device_type": 2 00:13:02.976 } 00:13:02.976 ], 00:13:02.976 "driver_specific": { 00:13:02.976 "raid": { 00:13:02.976 "uuid": "22688430-e2c9-41a2-b471-f362646e0a14", 00:13:02.976 "strip_size_kb": 0, 00:13:02.976 "state": "online", 00:13:02.976 "raid_level": "raid1", 00:13:02.976 "superblock": true, 00:13:02.976 "num_base_bdevs": 3, 00:13:02.976 "num_base_bdevs_discovered": 3, 00:13:02.976 "num_base_bdevs_operational": 3, 00:13:02.976 "base_bdevs_list": [ 00:13:02.976 { 
00:13:02.976 "name": "pt1", 00:13:02.976 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:02.976 "is_configured": true, 00:13:02.976 "data_offset": 2048, 00:13:02.976 "data_size": 63488 00:13:02.976 }, 00:13:02.976 { 00:13:02.976 "name": "pt2", 00:13:02.976 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:02.976 "is_configured": true, 00:13:02.976 "data_offset": 2048, 00:13:02.976 "data_size": 63488 00:13:02.976 }, 00:13:02.976 { 00:13:02.976 "name": "pt3", 00:13:02.976 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:02.976 "is_configured": true, 00:13:02.976 "data_offset": 2048, 00:13:02.976 "data_size": 63488 00:13:02.976 } 00:13:02.976 ] 00:13:02.976 } 00:13:02.976 } 00:13:02.976 }' 00:13:02.976 06:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:02.976 06:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:02.976 pt2 00:13:02.976 pt3' 00:13:02.976 06:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:02.976 06:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:02.976 06:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:02.976 06:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:02.976 06:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:02.976 06:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.976 06:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.976 06:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.261 06:22:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:03.261 06:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:03.261 06:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:03.261 06:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:03.261 06:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.261 06:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.261 06:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:03.261 06:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.261 06:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:03.261 06:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:03.261 06:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:03.261 06:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:03.261 06:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:03.261 06:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.261 06:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.261 06:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.261 06:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:03.261 06:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 
00:13:03.261 06:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:03.261 06:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:03.261 06:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.261 06:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.261 [2024-11-26 06:22:47.218930] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:03.261 06:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.261 06:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=22688430-e2c9-41a2-b471-f362646e0a14 00:13:03.261 06:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 22688430-e2c9-41a2-b471-f362646e0a14 ']' 00:13:03.261 06:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:03.261 06:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.261 06:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.261 [2024-11-26 06:22:47.254423] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:03.261 [2024-11-26 06:22:47.254516] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:03.261 [2024-11-26 06:22:47.254639] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:03.261 [2024-11-26 06:22:47.254751] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:03.261 [2024-11-26 06:22:47.254765] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:03.261 06:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:13:03.261 06:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:03.261 06:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.261 06:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.261 06:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.261 06:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.261 06:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:13:03.261 06:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:03.261 06:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:03.261 06:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:13:03.261 06:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.261 06:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.261 06:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.261 06:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:03.261 06:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:13:03.261 06:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.261 06:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.261 06:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.261 06:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:03.261 06:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # 
rpc_cmd bdev_passthru_delete pt3 00:13:03.261 06:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.261 06:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.261 06:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.261 06:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:03.261 06:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:13:03.261 06:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.261 06:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.261 06:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.261 06:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:13:03.261 06:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:03.261 06:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:13:03.261 06:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:03.262 06:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:03.262 06:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:03.262 06:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:03.262 06:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:03.262 06:22:47 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:03.262 06:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.262 06:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.262 [2024-11-26 06:22:47.382299] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:03.262 [2024-11-26 06:22:47.384892] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:03.262 [2024-11-26 06:22:47.384952] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:03.262 [2024-11-26 06:22:47.385030] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:03.262 [2024-11-26 06:22:47.385132] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:03.262 [2024-11-26 06:22:47.385178] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:03.262 [2024-11-26 06:22:47.385198] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:03.262 [2024-11-26 06:22:47.385210] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:13:03.262 request: 00:13:03.262 { 00:13:03.262 "name": "raid_bdev1", 00:13:03.262 "raid_level": "raid1", 00:13:03.262 "base_bdevs": [ 00:13:03.262 "malloc1", 00:13:03.262 "malloc2", 00:13:03.262 "malloc3" 00:13:03.262 ], 00:13:03.262 "superblock": false, 00:13:03.262 "method": "bdev_raid_create", 00:13:03.262 "req_id": 1 00:13:03.262 } 00:13:03.262 Got JSON-RPC error response 00:13:03.262 response: 00:13:03.262 { 00:13:03.262 "code": -17, 00:13:03.262 "message": "Failed to create RAID bdev raid_bdev1: File exists" 
00:13:03.262 } 00:13:03.262 06:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:03.262 06:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:13:03.262 06:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:03.262 06:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:03.522 06:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:03.522 06:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:03.522 06:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.522 06:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.522 06:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.522 06:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.522 06:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:03.522 06:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:03.522 06:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:03.522 06:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.522 06:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.522 [2024-11-26 06:22:47.446137] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:03.522 [2024-11-26 06:22:47.446295] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:03.522 [2024-11-26 06:22:47.446352] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:03.522 
[2024-11-26 06:22:47.446402] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:03.522 [2024-11-26 06:22:47.449471] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:03.522 [2024-11-26 06:22:47.449554] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:03.523 [2024-11-26 06:22:47.449710] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:03.523 [2024-11-26 06:22:47.449819] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:03.523 pt1 00:13:03.523 06:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.523 06:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:13:03.523 06:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:03.523 06:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:03.523 06:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:03.523 06:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:03.523 06:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:03.523 06:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:03.523 06:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:03.523 06:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:03.523 06:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:03.523 06:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.523 06:22:47 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.523 06:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.523 06:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.523 06:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.523 06:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:03.523 "name": "raid_bdev1", 00:13:03.523 "uuid": "22688430-e2c9-41a2-b471-f362646e0a14", 00:13:03.523 "strip_size_kb": 0, 00:13:03.523 "state": "configuring", 00:13:03.523 "raid_level": "raid1", 00:13:03.523 "superblock": true, 00:13:03.523 "num_base_bdevs": 3, 00:13:03.523 "num_base_bdevs_discovered": 1, 00:13:03.523 "num_base_bdevs_operational": 3, 00:13:03.523 "base_bdevs_list": [ 00:13:03.523 { 00:13:03.523 "name": "pt1", 00:13:03.523 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:03.523 "is_configured": true, 00:13:03.523 "data_offset": 2048, 00:13:03.523 "data_size": 63488 00:13:03.523 }, 00:13:03.523 { 00:13:03.523 "name": null, 00:13:03.523 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:03.523 "is_configured": false, 00:13:03.523 "data_offset": 2048, 00:13:03.523 "data_size": 63488 00:13:03.523 }, 00:13:03.523 { 00:13:03.523 "name": null, 00:13:03.523 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:03.523 "is_configured": false, 00:13:03.523 "data_offset": 2048, 00:13:03.523 "data_size": 63488 00:13:03.523 } 00:13:03.523 ] 00:13:03.523 }' 00:13:03.523 06:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:03.523 06:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.092 06:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:13:04.092 06:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:13:04.092 06:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.092 06:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.092 [2024-11-26 06:22:47.973288] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:04.092 [2024-11-26 06:22:47.973375] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:04.092 [2024-11-26 06:22:47.973405] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:04.092 [2024-11-26 06:22:47.973415] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:04.092 [2024-11-26 06:22:47.973995] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:04.092 [2024-11-26 06:22:47.974015] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:04.092 [2024-11-26 06:22:47.974140] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:04.092 [2024-11-26 06:22:47.974170] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:04.092 pt2 00:13:04.092 06:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.092 06:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:13:04.092 06:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.092 06:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.092 [2024-11-26 06:22:47.985276] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:04.092 06:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.092 06:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 
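The `verify_raid_bdev_state raid_bdev1 configuring raid1 0 3` calls in this trace boil down to `bdev_raid.sh@113`: fetch `bdev_raid_get_bdevs all`, select the record whose `.name` matches, and compare its `state`, `raid_level`, and base-bdev counts against the expected values. A small Python sketch of that jq selection step, using an abridged record from this log's output (the function name is illustrative):

```python
# Sketch of the jq step '.[] | select(.name == "raid_bdev1")' from
# verify_raid_bdev_state: pick the named raid bdev out of the RPC result,
# then assert on the fields the test compares.
def find_raid_bdev(bdevs, name):
    return next((b for b in bdevs if b.get("name") == name), None)

# Abridged from the raid_bdev_info JSON printed in this log.
rpc_result = [{
    "name": "raid_bdev1",
    "state": "configuring",
    "raid_level": "raid1",
    "num_base_bdevs": 3,
    "num_base_bdevs_discovered": 1,
    "num_base_bdevs_operational": 3,
}]

info = find_raid_bdev(rpc_result, "raid_bdev1")
assert info["state"] == "configuring" and info["num_base_bdevs_discovered"] == 1
```

At this point in the run only `pt1` has been re-registered, which is why the test expects `configuring` with one of three base bdevs discovered; once `pt2` and `pt3` are recreated the same check runs with `online` and 3/3.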
00:13:04.092 06:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:04.092 06:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:04.092 06:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:04.092 06:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:04.092 06:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:04.092 06:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:04.092 06:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:04.092 06:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:04.092 06:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.092 06:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.092 06:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.092 06:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.092 06:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.092 06:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.092 06:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:04.092 "name": "raid_bdev1", 00:13:04.092 "uuid": "22688430-e2c9-41a2-b471-f362646e0a14", 00:13:04.092 "strip_size_kb": 0, 00:13:04.092 "state": "configuring", 00:13:04.092 "raid_level": "raid1", 00:13:04.092 "superblock": true, 00:13:04.092 "num_base_bdevs": 3, 00:13:04.092 "num_base_bdevs_discovered": 1, 00:13:04.092 "num_base_bdevs_operational": 3, 00:13:04.092 
"base_bdevs_list": [ 00:13:04.092 { 00:13:04.092 "name": "pt1", 00:13:04.092 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:04.092 "is_configured": true, 00:13:04.092 "data_offset": 2048, 00:13:04.092 "data_size": 63488 00:13:04.092 }, 00:13:04.092 { 00:13:04.092 "name": null, 00:13:04.092 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:04.092 "is_configured": false, 00:13:04.092 "data_offset": 0, 00:13:04.092 "data_size": 63488 00:13:04.092 }, 00:13:04.092 { 00:13:04.092 "name": null, 00:13:04.092 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:04.092 "is_configured": false, 00:13:04.092 "data_offset": 2048, 00:13:04.092 "data_size": 63488 00:13:04.092 } 00:13:04.092 ] 00:13:04.092 }' 00:13:04.092 06:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:04.092 06:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.351 06:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:13:04.351 06:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:04.351 06:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:04.351 06:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.351 06:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.351 [2024-11-26 06:22:48.444503] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:04.351 [2024-11-26 06:22:48.444684] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:04.351 [2024-11-26 06:22:48.444730] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:13:04.351 [2024-11-26 06:22:48.444812] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:04.351 [2024-11-26 
06:22:48.445500] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:04.351 [2024-11-26 06:22:48.445571] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:04.351 [2024-11-26 06:22:48.445737] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:04.351 [2024-11-26 06:22:48.445835] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:04.351 pt2 00:13:04.351 06:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.351 06:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:04.351 06:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:04.351 06:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:04.351 06:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.351 06:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.351 [2024-11-26 06:22:48.456446] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:04.351 [2024-11-26 06:22:48.456546] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:04.351 [2024-11-26 06:22:48.456589] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:04.352 [2024-11-26 06:22:48.456632] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:04.352 [2024-11-26 06:22:48.457180] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:04.352 [2024-11-26 06:22:48.457257] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:04.352 [2024-11-26 06:22:48.457385] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on 
bdev pt3 00:13:04.352 [2024-11-26 06:22:48.457443] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:04.352 [2024-11-26 06:22:48.457642] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:04.352 [2024-11-26 06:22:48.457693] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:04.352 [2024-11-26 06:22:48.458035] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:04.352 [2024-11-26 06:22:48.458300] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:04.352 [2024-11-26 06:22:48.458349] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:04.352 [2024-11-26 06:22:48.458586] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:04.352 pt3 00:13:04.352 06:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.352 06:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:04.352 06:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:04.352 06:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:04.352 06:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:04.352 06:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:04.352 06:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:04.352 06:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:04.352 06:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:04.352 06:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:13:04.352 06:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:04.352 06:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:04.352 06:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.352 06:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.352 06:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.352 06:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.352 06:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.610 06:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.610 06:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:04.610 "name": "raid_bdev1", 00:13:04.610 "uuid": "22688430-e2c9-41a2-b471-f362646e0a14", 00:13:04.610 "strip_size_kb": 0, 00:13:04.610 "state": "online", 00:13:04.610 "raid_level": "raid1", 00:13:04.610 "superblock": true, 00:13:04.610 "num_base_bdevs": 3, 00:13:04.610 "num_base_bdevs_discovered": 3, 00:13:04.610 "num_base_bdevs_operational": 3, 00:13:04.610 "base_bdevs_list": [ 00:13:04.610 { 00:13:04.610 "name": "pt1", 00:13:04.610 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:04.610 "is_configured": true, 00:13:04.610 "data_offset": 2048, 00:13:04.610 "data_size": 63488 00:13:04.610 }, 00:13:04.610 { 00:13:04.610 "name": "pt2", 00:13:04.611 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:04.611 "is_configured": true, 00:13:04.611 "data_offset": 2048, 00:13:04.611 "data_size": 63488 00:13:04.611 }, 00:13:04.611 { 00:13:04.611 "name": "pt3", 00:13:04.611 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:04.611 "is_configured": true, 00:13:04.611 "data_offset": 2048, 
00:13:04.611 "data_size": 63488 00:13:04.611 } 00:13:04.611 ] 00:13:04.611 }' 00:13:04.611 06:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:04.611 06:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.870 06:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:04.870 06:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:04.870 06:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:04.870 06:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:04.870 06:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:04.870 06:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:04.870 06:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:04.870 06:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:04.870 06:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.870 06:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.870 [2024-11-26 06:22:48.924178] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:04.870 06:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.870 06:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:04.870 "name": "raid_bdev1", 00:13:04.870 "aliases": [ 00:13:04.870 "22688430-e2c9-41a2-b471-f362646e0a14" 00:13:04.870 ], 00:13:04.870 "product_name": "Raid Volume", 00:13:04.870 "block_size": 512, 00:13:04.870 "num_blocks": 63488, 00:13:04.870 "uuid": "22688430-e2c9-41a2-b471-f362646e0a14", 00:13:04.870 
"assigned_rate_limits": { 00:13:04.870 "rw_ios_per_sec": 0, 00:13:04.870 "rw_mbytes_per_sec": 0, 00:13:04.870 "r_mbytes_per_sec": 0, 00:13:04.870 "w_mbytes_per_sec": 0 00:13:04.870 }, 00:13:04.870 "claimed": false, 00:13:04.870 "zoned": false, 00:13:04.870 "supported_io_types": { 00:13:04.870 "read": true, 00:13:04.870 "write": true, 00:13:04.870 "unmap": false, 00:13:04.870 "flush": false, 00:13:04.870 "reset": true, 00:13:04.870 "nvme_admin": false, 00:13:04.870 "nvme_io": false, 00:13:04.870 "nvme_io_md": false, 00:13:04.870 "write_zeroes": true, 00:13:04.870 "zcopy": false, 00:13:04.870 "get_zone_info": false, 00:13:04.870 "zone_management": false, 00:13:04.870 "zone_append": false, 00:13:04.870 "compare": false, 00:13:04.870 "compare_and_write": false, 00:13:04.870 "abort": false, 00:13:04.870 "seek_hole": false, 00:13:04.870 "seek_data": false, 00:13:04.870 "copy": false, 00:13:04.870 "nvme_iov_md": false 00:13:04.870 }, 00:13:04.870 "memory_domains": [ 00:13:04.870 { 00:13:04.870 "dma_device_id": "system", 00:13:04.870 "dma_device_type": 1 00:13:04.870 }, 00:13:04.870 { 00:13:04.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:04.870 "dma_device_type": 2 00:13:04.870 }, 00:13:04.870 { 00:13:04.870 "dma_device_id": "system", 00:13:04.870 "dma_device_type": 1 00:13:04.870 }, 00:13:04.870 { 00:13:04.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:04.870 "dma_device_type": 2 00:13:04.870 }, 00:13:04.870 { 00:13:04.870 "dma_device_id": "system", 00:13:04.870 "dma_device_type": 1 00:13:04.870 }, 00:13:04.870 { 00:13:04.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:04.870 "dma_device_type": 2 00:13:04.870 } 00:13:04.870 ], 00:13:04.870 "driver_specific": { 00:13:04.870 "raid": { 00:13:04.870 "uuid": "22688430-e2c9-41a2-b471-f362646e0a14", 00:13:04.870 "strip_size_kb": 0, 00:13:04.870 "state": "online", 00:13:04.870 "raid_level": "raid1", 00:13:04.870 "superblock": true, 00:13:04.870 "num_base_bdevs": 3, 00:13:04.870 "num_base_bdevs_discovered": 3, 
00:13:04.870 "num_base_bdevs_operational": 3, 00:13:04.870 "base_bdevs_list": [ 00:13:04.870 { 00:13:04.870 "name": "pt1", 00:13:04.870 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:04.870 "is_configured": true, 00:13:04.870 "data_offset": 2048, 00:13:04.870 "data_size": 63488 00:13:04.870 }, 00:13:04.870 { 00:13:04.870 "name": "pt2", 00:13:04.870 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:04.870 "is_configured": true, 00:13:04.870 "data_offset": 2048, 00:13:04.870 "data_size": 63488 00:13:04.870 }, 00:13:04.870 { 00:13:04.870 "name": "pt3", 00:13:04.870 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:04.870 "is_configured": true, 00:13:04.870 "data_offset": 2048, 00:13:04.870 "data_size": 63488 00:13:04.870 } 00:13:04.870 ] 00:13:04.870 } 00:13:04.870 } 00:13:04.870 }' 00:13:04.870 06:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:05.129 06:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:05.129 pt2 00:13:05.129 pt3' 00:13:05.129 06:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:05.129 06:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:05.129 06:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:05.129 06:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:05.129 06:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.129 06:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.129 06:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:05.129 06:22:49 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.129 06:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:05.129 06:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:05.129 06:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:05.129 06:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:05.129 06:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:05.129 06:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.129 06:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.129 06:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.129 06:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:05.129 06:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:05.129 06:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:05.129 06:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:05.129 06:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:05.129 06:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.129 06:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.129 06:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.129 06:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:13:05.129 06:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:05.129 06:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:05.129 06:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:05.129 06:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.129 06:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.129 [2024-11-26 06:22:49.215630] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:05.129 06:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.129 06:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 22688430-e2c9-41a2-b471-f362646e0a14 '!=' 22688430-e2c9-41a2-b471-f362646e0a14 ']' 00:13:05.129 06:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:13:05.129 06:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:05.129 06:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:05.129 06:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:13:05.129 06:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.129 06:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.388 [2024-11-26 06:22:49.267294] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:13:05.388 06:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.388 06:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:05.388 06:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:13:05.388 06:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:05.388 06:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:05.388 06:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:05.388 06:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:05.388 06:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:05.388 06:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:05.388 06:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:05.388 06:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:05.388 06:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.388 06:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.388 06:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.388 06:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.388 06:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.388 06:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:05.388 "name": "raid_bdev1", 00:13:05.388 "uuid": "22688430-e2c9-41a2-b471-f362646e0a14", 00:13:05.388 "strip_size_kb": 0, 00:13:05.388 "state": "online", 00:13:05.388 "raid_level": "raid1", 00:13:05.388 "superblock": true, 00:13:05.388 "num_base_bdevs": 3, 00:13:05.388 "num_base_bdevs_discovered": 2, 00:13:05.388 "num_base_bdevs_operational": 2, 00:13:05.388 "base_bdevs_list": [ 00:13:05.388 { 00:13:05.388 "name": null, 00:13:05.388 "uuid": "00000000-0000-0000-0000-000000000000", 
00:13:05.388 "is_configured": false, 00:13:05.388 "data_offset": 0, 00:13:05.388 "data_size": 63488 00:13:05.388 }, 00:13:05.388 { 00:13:05.388 "name": "pt2", 00:13:05.388 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:05.388 "is_configured": true, 00:13:05.388 "data_offset": 2048, 00:13:05.388 "data_size": 63488 00:13:05.388 }, 00:13:05.388 { 00:13:05.388 "name": "pt3", 00:13:05.388 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:05.388 "is_configured": true, 00:13:05.388 "data_offset": 2048, 00:13:05.388 "data_size": 63488 00:13:05.388 } 00:13:05.388 ] 00:13:05.388 }' 00:13:05.388 06:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:05.388 06:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.645 06:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:05.645 06:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.645 06:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.645 [2024-11-26 06:22:49.738367] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:05.645 [2024-11-26 06:22:49.738458] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:05.645 [2024-11-26 06:22:49.738613] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:05.645 [2024-11-26 06:22:49.738726] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:05.645 [2024-11-26 06:22:49.738797] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:05.645 06:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.645 06:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:13:05.645 
06:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.645 06:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.645 06:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.645 06:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.930 06:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:13:05.930 06:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:13:05.930 06:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:13:05.930 06:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:05.930 06:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:13:05.930 06:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.930 06:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.930 06:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.930 06:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:05.930 06:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:05.930 06:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:13:05.930 06:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.930 06:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.930 06:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.930 06:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:05.930 06:22:49 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:05.930 06:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:13:05.930 06:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:05.930 06:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:05.930 06:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.930 06:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.930 [2024-11-26 06:22:49.830208] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:05.930 [2024-11-26 06:22:49.830335] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:05.930 [2024-11-26 06:22:49.830377] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:13:05.930 [2024-11-26 06:22:49.830416] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:05.930 [2024-11-26 06:22:49.833431] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:05.930 [2024-11-26 06:22:49.833519] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:05.930 [2024-11-26 06:22:49.833672] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:05.930 [2024-11-26 06:22:49.833781] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:05.930 pt2 00:13:05.930 06:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.930 06:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:13:05.930 06:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:05.930 06:22:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:05.930 06:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:05.930 06:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:05.930 06:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:05.930 06:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:05.930 06:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:05.930 06:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:05.930 06:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:05.930 06:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.930 06:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.930 06:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.930 06:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.930 06:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.930 06:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:05.930 "name": "raid_bdev1", 00:13:05.930 "uuid": "22688430-e2c9-41a2-b471-f362646e0a14", 00:13:05.930 "strip_size_kb": 0, 00:13:05.930 "state": "configuring", 00:13:05.930 "raid_level": "raid1", 00:13:05.930 "superblock": true, 00:13:05.930 "num_base_bdevs": 3, 00:13:05.930 "num_base_bdevs_discovered": 1, 00:13:05.930 "num_base_bdevs_operational": 2, 00:13:05.930 "base_bdevs_list": [ 00:13:05.930 { 00:13:05.930 "name": null, 00:13:05.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:05.930 
"is_configured": false, 00:13:05.930 "data_offset": 2048, 00:13:05.930 "data_size": 63488 00:13:05.930 }, 00:13:05.930 { 00:13:05.930 "name": "pt2", 00:13:05.930 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:05.930 "is_configured": true, 00:13:05.930 "data_offset": 2048, 00:13:05.930 "data_size": 63488 00:13:05.930 }, 00:13:05.930 { 00:13:05.930 "name": null, 00:13:05.930 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:05.930 "is_configured": false, 00:13:05.930 "data_offset": 2048, 00:13:05.930 "data_size": 63488 00:13:05.930 } 00:13:05.930 ] 00:13:05.930 }' 00:13:05.930 06:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:05.930 06:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.191 06:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:13:06.191 06:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:06.191 06:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:13:06.191 06:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:06.191 06:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.191 06:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.191 [2024-11-26 06:22:50.301452] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:06.191 [2024-11-26 06:22:50.301606] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:06.191 [2024-11-26 06:22:50.301668] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:06.191 [2024-11-26 06:22:50.301706] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:06.191 [2024-11-26 06:22:50.302357] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:06.191 [2024-11-26 06:22:50.302443] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:06.191 [2024-11-26 06:22:50.302626] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:06.191 [2024-11-26 06:22:50.302713] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:06.191 [2024-11-26 06:22:50.302910] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:06.191 [2024-11-26 06:22:50.302956] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:06.191 [2024-11-26 06:22:50.303350] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:06.191 [2024-11-26 06:22:50.303589] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:06.191 [2024-11-26 06:22:50.303632] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:13:06.191 [2024-11-26 06:22:50.303916] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:06.191 pt3 00:13:06.191 06:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.191 06:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:06.191 06:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:06.191 06:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:06.191 06:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:06.191 06:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:06.191 06:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:13:06.191 06:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.191 06:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.191 06:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.191 06:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.191 06:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.191 06:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.191 06:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.191 06:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.448 06:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.448 06:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.448 "name": "raid_bdev1", 00:13:06.448 "uuid": "22688430-e2c9-41a2-b471-f362646e0a14", 00:13:06.448 "strip_size_kb": 0, 00:13:06.448 "state": "online", 00:13:06.448 "raid_level": "raid1", 00:13:06.448 "superblock": true, 00:13:06.448 "num_base_bdevs": 3, 00:13:06.448 "num_base_bdevs_discovered": 2, 00:13:06.448 "num_base_bdevs_operational": 2, 00:13:06.448 "base_bdevs_list": [ 00:13:06.448 { 00:13:06.448 "name": null, 00:13:06.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.448 "is_configured": false, 00:13:06.448 "data_offset": 2048, 00:13:06.448 "data_size": 63488 00:13:06.448 }, 00:13:06.448 { 00:13:06.448 "name": "pt2", 00:13:06.448 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:06.448 "is_configured": true, 00:13:06.448 "data_offset": 2048, 00:13:06.448 "data_size": 63488 00:13:06.448 }, 00:13:06.448 { 00:13:06.448 "name": "pt3", 00:13:06.448 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:13:06.448 "is_configured": true, 00:13:06.448 "data_offset": 2048, 00:13:06.448 "data_size": 63488 00:13:06.448 } 00:13:06.448 ] 00:13:06.448 }' 00:13:06.448 06:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.448 06:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.706 06:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:06.706 06:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.706 06:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.706 [2024-11-26 06:22:50.812569] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:06.706 [2024-11-26 06:22:50.812675] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:06.706 [2024-11-26 06:22:50.812852] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:06.706 [2024-11-26 06:22:50.812984] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:06.706 [2024-11-26 06:22:50.813067] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:13:06.706 06:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.706 06:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.706 06:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.706 06:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.706 06:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:13:06.706 06:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:06.965 06:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:13:06.965 06:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:13:06.965 06:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:13:06.965 06:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:13:06.965 06:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:13:06.965 06:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.965 06:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.965 06:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.965 06:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:06.965 06:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.965 06:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.965 [2024-11-26 06:22:50.888461] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:06.965 [2024-11-26 06:22:50.888537] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:06.965 [2024-11-26 06:22:50.888569] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:06.965 [2024-11-26 06:22:50.888580] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:06.965 [2024-11-26 06:22:50.891585] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:06.965 [2024-11-26 06:22:50.891628] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:06.965 [2024-11-26 06:22:50.891743] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid 
superblock found on bdev pt1 00:13:06.965 [2024-11-26 06:22:50.891804] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:06.965 [2024-11-26 06:22:50.891990] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:13:06.965 [2024-11-26 06:22:50.892003] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:06.965 [2024-11-26 06:22:50.892022] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:13:06.965 [2024-11-26 06:22:50.892109] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:06.965 pt1 00:13:06.965 06:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.965 06:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:13:06.965 06:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:13:06.965 06:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:06.965 06:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:06.965 06:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:06.965 06:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:06.965 06:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:06.965 06:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.965 06:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.965 06:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.965 06:22:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.965 06:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.965 06:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.965 06:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.965 06:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.965 06:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.966 06:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.966 "name": "raid_bdev1", 00:13:06.966 "uuid": "22688430-e2c9-41a2-b471-f362646e0a14", 00:13:06.966 "strip_size_kb": 0, 00:13:06.966 "state": "configuring", 00:13:06.966 "raid_level": "raid1", 00:13:06.966 "superblock": true, 00:13:06.966 "num_base_bdevs": 3, 00:13:06.966 "num_base_bdevs_discovered": 1, 00:13:06.966 "num_base_bdevs_operational": 2, 00:13:06.966 "base_bdevs_list": [ 00:13:06.966 { 00:13:06.966 "name": null, 00:13:06.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.966 "is_configured": false, 00:13:06.966 "data_offset": 2048, 00:13:06.966 "data_size": 63488 00:13:06.966 }, 00:13:06.966 { 00:13:06.966 "name": "pt2", 00:13:06.966 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:06.966 "is_configured": true, 00:13:06.966 "data_offset": 2048, 00:13:06.966 "data_size": 63488 00:13:06.966 }, 00:13:06.966 { 00:13:06.966 "name": null, 00:13:06.966 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:06.966 "is_configured": false, 00:13:06.966 "data_offset": 2048, 00:13:06.966 "data_size": 63488 00:13:06.966 } 00:13:06.966 ] 00:13:06.966 }' 00:13:06.966 06:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.966 06:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:13:07.225 06:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:13:07.225 06:22:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.225 06:22:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.225 06:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:07.225 06:22:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.483 06:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:13:07.483 06:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:07.483 06:22:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.484 06:22:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.484 [2024-11-26 06:22:51.395640] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:07.484 [2024-11-26 06:22:51.395796] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:07.484 [2024-11-26 06:22:51.395846] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:13:07.484 [2024-11-26 06:22:51.395882] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:07.484 [2024-11-26 06:22:51.396581] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:07.484 [2024-11-26 06:22:51.396646] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:07.484 [2024-11-26 06:22:51.396810] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:07.484 [2024-11-26 06:22:51.396909] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev pt3 is claimed 00:13:07.484 [2024-11-26 06:22:51.397147] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:13:07.484 [2024-11-26 06:22:51.397193] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:07.484 [2024-11-26 06:22:51.397565] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:07.484 [2024-11-26 06:22:51.397821] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:13:07.484 [2024-11-26 06:22:51.397874] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:13:07.484 [2024-11-26 06:22:51.398139] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:07.484 pt3 00:13:07.484 06:22:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.484 06:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:07.484 06:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:07.484 06:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:07.484 06:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:07.484 06:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:07.484 06:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:07.484 06:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:07.484 06:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:07.484 06:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:07.484 06:22:51 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:13:07.484 06:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.484 06:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.484 06:22:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.484 06:22:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.484 06:22:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.484 06:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:07.484 "name": "raid_bdev1", 00:13:07.484 "uuid": "22688430-e2c9-41a2-b471-f362646e0a14", 00:13:07.484 "strip_size_kb": 0, 00:13:07.484 "state": "online", 00:13:07.484 "raid_level": "raid1", 00:13:07.484 "superblock": true, 00:13:07.484 "num_base_bdevs": 3, 00:13:07.484 "num_base_bdevs_discovered": 2, 00:13:07.484 "num_base_bdevs_operational": 2, 00:13:07.484 "base_bdevs_list": [ 00:13:07.484 { 00:13:07.484 "name": null, 00:13:07.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.484 "is_configured": false, 00:13:07.484 "data_offset": 2048, 00:13:07.484 "data_size": 63488 00:13:07.484 }, 00:13:07.484 { 00:13:07.484 "name": "pt2", 00:13:07.484 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:07.484 "is_configured": true, 00:13:07.484 "data_offset": 2048, 00:13:07.484 "data_size": 63488 00:13:07.484 }, 00:13:07.484 { 00:13:07.484 "name": "pt3", 00:13:07.484 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:07.484 "is_configured": true, 00:13:07.484 "data_offset": 2048, 00:13:07.484 "data_size": 63488 00:13:07.484 } 00:13:07.484 ] 00:13:07.484 }' 00:13:07.484 06:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:07.484 06:22:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.051 06:22:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:13:08.051 06:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:08.051 06:22:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.051 06:22:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.051 06:22:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.051 06:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:13:08.051 06:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:13:08.051 06:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:08.051 06:22:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.051 06:22:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.051 [2024-11-26 06:22:51.955008] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:08.051 06:22:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.051 06:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 22688430-e2c9-41a2-b471-f362646e0a14 '!=' 22688430-e2c9-41a2-b471-f362646e0a14 ']' 00:13:08.051 06:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 69081 00:13:08.051 06:22:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 69081 ']' 00:13:08.051 06:22:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 69081 00:13:08.051 06:22:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:13:08.051 06:22:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:08.051 
06:22:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69081 00:13:08.051 06:22:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:08.051 06:22:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:08.051 killing process with pid 69081 00:13:08.051 06:22:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69081' 00:13:08.051 06:22:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 69081 00:13:08.051 [2024-11-26 06:22:52.021267] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:08.051 [2024-11-26 06:22:52.021394] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:08.051 06:22:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 69081 00:13:08.051 [2024-11-26 06:22:52.021473] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:08.051 [2024-11-26 06:22:52.021488] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:13:08.310 [2024-11-26 06:22:52.410429] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:09.736 06:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:09.736 ************************************ 00:13:09.736 END TEST raid_superblock_test 00:13:09.736 ************************************ 00:13:09.736 00:13:09.736 real 0m8.615s 00:13:09.736 user 0m13.237s 00:13:09.736 sys 0m1.597s 00:13:09.736 06:22:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:09.736 06:22:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.736 06:22:53 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 
read 00:13:09.736 06:22:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:09.736 06:22:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:09.736 06:22:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:09.736 ************************************ 00:13:09.736 START TEST raid_read_error_test 00:13:09.736 ************************************ 00:13:09.736 06:22:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:13:09.736 06:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:13:09.736 06:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:13:09.736 06:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:13:09.736 06:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:09.736 06:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:09.736 06:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:09.736 06:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:09.736 06:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:09.736 06:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:09.736 06:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:09.736 06:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:09.736 06:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:09.736 06:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:09.736 06:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:09.736 06:22:53 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:09.736 06:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:09.736 06:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:09.736 06:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:09.736 06:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:09.736 06:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:09.736 06:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:09.736 06:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:13:09.736 06:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:13:09.736 06:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:09.995 06:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.c2sBP1Ncx2 00:13:09.995 06:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69537 00:13:09.995 06:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:09.996 06:22:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69537 00:13:09.996 06:22:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 69537 ']' 00:13:09.996 06:22:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:09.996 06:22:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:09.996 06:22:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:09.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:09.996 06:22:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:09.996 06:22:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.996 [2024-11-26 06:22:53.974392] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:13:09.996 [2024-11-26 06:22:53.974637] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69537 ] 00:13:10.255 [2024-11-26 06:22:54.143018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:10.255 [2024-11-26 06:22:54.300341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:10.514 [2024-11-26 06:22:54.576993] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:10.515 [2024-11-26 06:22:54.577081] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:11.082 06:22:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:11.082 06:22:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:11.082 06:22:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:11.082 06:22:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:11.082 06:22:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.082 06:22:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.082 BaseBdev1_malloc 00:13:11.082 06:22:54 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.082 06:22:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:11.082 06:22:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.082 06:22:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.082 true 00:13:11.082 06:22:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.082 06:22:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:11.082 06:22:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.082 06:22:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.082 [2024-11-26 06:22:54.993395] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:11.082 [2024-11-26 06:22:54.993471] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:11.082 [2024-11-26 06:22:54.993507] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:11.082 [2024-11-26 06:22:54.993521] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:11.082 [2024-11-26 06:22:54.996503] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:11.082 [2024-11-26 06:22:54.996592] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:11.082 BaseBdev1 00:13:11.082 06:22:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.083 06:22:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:11.083 06:22:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:11.083 06:22:54 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.083 06:22:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.083 BaseBdev2_malloc 00:13:11.083 06:22:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.083 06:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:11.083 06:22:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.083 06:22:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.083 true 00:13:11.083 06:22:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.083 06:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:11.083 06:22:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.083 06:22:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.083 [2024-11-26 06:22:55.076250] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:11.083 [2024-11-26 06:22:55.076323] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:11.083 [2024-11-26 06:22:55.076347] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:11.083 [2024-11-26 06:22:55.076361] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:11.083 [2024-11-26 06:22:55.079185] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:11.083 [2024-11-26 06:22:55.079228] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:11.083 BaseBdev2 00:13:11.083 06:22:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.083 
06:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:11.083 06:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:11.083 06:22:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.083 06:22:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.083 BaseBdev3_malloc 00:13:11.083 06:22:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.083 06:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:11.083 06:22:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.083 06:22:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.083 true 00:13:11.083 06:22:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.083 06:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:11.083 06:22:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.083 06:22:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.083 [2024-11-26 06:22:55.169817] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:11.083 [2024-11-26 06:22:55.169959] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:11.083 [2024-11-26 06:22:55.170007] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:11.083 [2024-11-26 06:22:55.170094] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:11.083 [2024-11-26 06:22:55.173203] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:13:11.083 [2024-11-26 06:22:55.173289] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:11.083 BaseBdev3 00:13:11.083 06:22:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.083 06:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:13:11.083 06:22:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.083 06:22:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.083 [2024-11-26 06:22:55.182023] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:11.083 [2024-11-26 06:22:55.184569] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:11.083 [2024-11-26 06:22:55.184730] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:11.083 [2024-11-26 06:22:55.185112] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:11.083 [2024-11-26 06:22:55.185176] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:11.083 [2024-11-26 06:22:55.185573] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:13:11.083 [2024-11-26 06:22:55.185847] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:11.083 [2024-11-26 06:22:55.185900] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:13:11.083 [2024-11-26 06:22:55.186211] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:11.083 06:22:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.083 06:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 3 00:13:11.083 06:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:11.083 06:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:11.083 06:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:11.083 06:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:11.083 06:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:11.083 06:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.083 06:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.083 06:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:11.083 06:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.083 06:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.083 06:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.083 06:22:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.083 06:22:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.342 06:22:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.342 06:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.342 "name": "raid_bdev1", 00:13:11.342 "uuid": "ad30be77-9943-4a4b-b4cb-3c03643dcc95", 00:13:11.342 "strip_size_kb": 0, 00:13:11.342 "state": "online", 00:13:11.342 "raid_level": "raid1", 00:13:11.342 "superblock": true, 00:13:11.342 "num_base_bdevs": 3, 00:13:11.342 "num_base_bdevs_discovered": 3, 00:13:11.342 "num_base_bdevs_operational": 
3, 00:13:11.342 "base_bdevs_list": [ 00:13:11.342 { 00:13:11.342 "name": "BaseBdev1", 00:13:11.342 "uuid": "35945c2d-0413-56e1-ad73-5ee1a7f88ef6", 00:13:11.342 "is_configured": true, 00:13:11.342 "data_offset": 2048, 00:13:11.342 "data_size": 63488 00:13:11.342 }, 00:13:11.342 { 00:13:11.342 "name": "BaseBdev2", 00:13:11.342 "uuid": "5174a7df-326f-56a7-8a82-e93251870978", 00:13:11.342 "is_configured": true, 00:13:11.342 "data_offset": 2048, 00:13:11.342 "data_size": 63488 00:13:11.342 }, 00:13:11.342 { 00:13:11.342 "name": "BaseBdev3", 00:13:11.342 "uuid": "030d863e-bbce-5258-8c52-629d3dc12c62", 00:13:11.342 "is_configured": true, 00:13:11.342 "data_offset": 2048, 00:13:11.342 "data_size": 63488 00:13:11.342 } 00:13:11.342 ] 00:13:11.342 }' 00:13:11.342 06:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.342 06:22:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.680 06:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:11.680 06:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:11.680 [2024-11-26 06:22:55.779042] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:13:12.616 06:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:12.616 06:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.616 06:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.616 06:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.616 06:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:12.616 06:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 
00:13:12.616 06:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:13:12.616 06:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:13:12.616 06:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:12.616 06:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:12.616 06:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:12.616 06:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:12.616 06:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:12.616 06:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:12.616 06:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:12.616 06:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:12.616 06:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:12.616 06:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:12.616 06:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.616 06:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.616 06:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.616 06:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.616 06:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.875 06:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:12.875 "name": 
"raid_bdev1", 00:13:12.875 "uuid": "ad30be77-9943-4a4b-b4cb-3c03643dcc95", 00:13:12.875 "strip_size_kb": 0, 00:13:12.875 "state": "online", 00:13:12.875 "raid_level": "raid1", 00:13:12.875 "superblock": true, 00:13:12.875 "num_base_bdevs": 3, 00:13:12.875 "num_base_bdevs_discovered": 3, 00:13:12.875 "num_base_bdevs_operational": 3, 00:13:12.875 "base_bdevs_list": [ 00:13:12.875 { 00:13:12.875 "name": "BaseBdev1", 00:13:12.875 "uuid": "35945c2d-0413-56e1-ad73-5ee1a7f88ef6", 00:13:12.875 "is_configured": true, 00:13:12.875 "data_offset": 2048, 00:13:12.875 "data_size": 63488 00:13:12.875 }, 00:13:12.875 { 00:13:12.875 "name": "BaseBdev2", 00:13:12.875 "uuid": "5174a7df-326f-56a7-8a82-e93251870978", 00:13:12.875 "is_configured": true, 00:13:12.875 "data_offset": 2048, 00:13:12.875 "data_size": 63488 00:13:12.875 }, 00:13:12.875 { 00:13:12.875 "name": "BaseBdev3", 00:13:12.875 "uuid": "030d863e-bbce-5258-8c52-629d3dc12c62", 00:13:12.875 "is_configured": true, 00:13:12.875 "data_offset": 2048, 00:13:12.875 "data_size": 63488 00:13:12.875 } 00:13:12.875 ] 00:13:12.875 }' 00:13:12.875 06:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:12.875 06:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.134 06:22:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:13.134 06:22:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.134 06:22:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.134 [2024-11-26 06:22:57.184012] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:13.134 [2024-11-26 06:22:57.184067] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:13.134 [2024-11-26 06:22:57.187259] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:13.134 [2024-11-26 
06:22:57.187319] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:13.134 [2024-11-26 06:22:57.187446] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:13.134 [2024-11-26 06:22:57.187459] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:13:13.134 { 00:13:13.134 "results": [ 00:13:13.134 { 00:13:13.134 "job": "raid_bdev1", 00:13:13.134 "core_mask": "0x1", 00:13:13.134 "workload": "randrw", 00:13:13.134 "percentage": 50, 00:13:13.134 "status": "finished", 00:13:13.134 "queue_depth": 1, 00:13:13.134 "io_size": 131072, 00:13:13.134 "runtime": 1.405024, 00:13:13.134 "iops": 8885.257476028879, 00:13:13.134 "mibps": 1110.6571845036099, 00:13:13.134 "io_failed": 0, 00:13:13.134 "io_timeout": 0, 00:13:13.134 "avg_latency_us": 109.5701484100522, 00:13:13.134 "min_latency_us": 25.823580786026202, 00:13:13.134 "max_latency_us": 1738.564192139738 00:13:13.134 } 00:13:13.134 ], 00:13:13.134 "core_count": 1 00:13:13.134 } 00:13:13.134 06:22:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.134 06:22:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69537 00:13:13.134 06:22:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 69537 ']' 00:13:13.134 06:22:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 69537 00:13:13.134 06:22:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:13:13.134 06:22:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:13.134 06:22:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69537 00:13:13.134 06:22:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:13.134 06:22:57 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:13.134 06:22:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69537' 00:13:13.134 killing process with pid 69537 00:13:13.134 06:22:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 69537 00:13:13.134 [2024-11-26 06:22:57.227139] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:13.134 06:22:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 69537 00:13:13.394 [2024-11-26 06:22:57.517014] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:15.299 06:22:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.c2sBP1Ncx2 00:13:15.299 06:22:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:15.299 06:22:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:15.299 06:22:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:13:15.299 06:22:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:13:15.299 06:22:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:15.299 06:22:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:15.299 ************************************ 00:13:15.299 END TEST raid_read_error_test 00:13:15.299 ************************************ 00:13:15.299 06:22:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:13:15.299 00:13:15.299 real 0m5.091s 00:13:15.299 user 0m5.962s 00:13:15.299 sys 0m0.723s 00:13:15.299 06:22:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:15.299 06:22:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.299 06:22:58 bdev_raid -- bdev/bdev_raid.sh@972 -- # 
run_test raid_write_error_test raid_io_error_test raid1 3 write 00:13:15.299 06:22:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:15.299 06:22:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:15.299 06:22:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:15.299 ************************************ 00:13:15.299 START TEST raid_write_error_test 00:13:15.299 ************************************ 00:13:15.299 06:22:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:13:15.299 06:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:13:15.299 06:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:13:15.299 06:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:13:15.299 06:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:15.299 06:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:15.299 06:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:15.299 06:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:15.299 06:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:15.299 06:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:15.299 06:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:15.299 06:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:15.299 06:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:15.299 06:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:15.299 06:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # 
(( i <= num_base_bdevs )) 00:13:15.299 06:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:15.299 06:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:15.299 06:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:15.299 06:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:15.299 06:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:15.299 06:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:15.299 06:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:15.299 06:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:13:15.299 06:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:13:15.299 06:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:15.299 06:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.57jxm0ZgFf 00:13:15.299 06:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69686 00:13:15.299 06:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:15.299 06:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69686 00:13:15.299 06:22:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 69686 ']' 00:13:15.299 06:22:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:15.299 06:22:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:15.299 06:22:59 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:15.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:15.299 06:22:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:15.299 06:22:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.299 [2024-11-26 06:22:59.130706] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:13:15.299 [2024-11-26 06:22:59.130868] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69686 ] 00:13:15.299 [2024-11-26 06:22:59.312692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:15.558 [2024-11-26 06:22:59.461409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:15.817 [2024-11-26 06:22:59.728919] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:15.817 [2024-11-26 06:22:59.728995] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:16.077 06:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:16.077 06:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:16.077 06:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:16.077 06:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:16.077 06:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.077 06:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:13:16.077 BaseBdev1_malloc 00:13:16.077 06:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.077 06:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:16.077 06:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.077 06:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.077 true 00:13:16.077 06:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.077 06:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:16.077 06:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.077 06:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.077 [2024-11-26 06:23:00.093145] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:16.077 [2024-11-26 06:23:00.093213] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:16.077 [2024-11-26 06:23:00.093255] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:16.077 [2024-11-26 06:23:00.093269] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:16.077 [2024-11-26 06:23:00.096047] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:16.077 [2024-11-26 06:23:00.096107] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:16.077 BaseBdev1 00:13:16.077 06:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.077 06:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:16.077 06:23:00 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:16.077 06:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.077 06:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.077 BaseBdev2_malloc 00:13:16.077 06:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.077 06:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:16.077 06:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.077 06:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.077 true 00:13:16.077 06:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.077 06:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:16.077 06:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.077 06:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.077 [2024-11-26 06:23:00.174377] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:16.077 [2024-11-26 06:23:00.174454] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:16.077 [2024-11-26 06:23:00.174481] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:16.077 [2024-11-26 06:23:00.174494] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:16.077 [2024-11-26 06:23:00.177440] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:16.077 [2024-11-26 06:23:00.177549] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:16.077 BaseBdev2 
00:13:16.077 06:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.077 06:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:16.077 06:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:16.077 06:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.077 06:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.337 BaseBdev3_malloc 00:13:16.337 06:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.337 06:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:16.337 06:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.337 06:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.337 true 00:13:16.337 06:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.337 06:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:16.337 06:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.337 06:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.337 [2024-11-26 06:23:00.262302] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:16.337 [2024-11-26 06:23:00.262370] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:16.337 [2024-11-26 06:23:00.262394] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:16.337 [2024-11-26 06:23:00.262406] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:13:16.337 [2024-11-26 06:23:00.265209] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:16.337 [2024-11-26 06:23:00.265252] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:16.337 BaseBdev3 00:13:16.337 06:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.337 06:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:13:16.337 06:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.337 06:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.337 [2024-11-26 06:23:00.274355] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:16.337 [2024-11-26 06:23:00.276693] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:16.337 [2024-11-26 06:23:00.276773] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:16.337 [2024-11-26 06:23:00.276991] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:16.337 [2024-11-26 06:23:00.277005] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:16.337 [2024-11-26 06:23:00.277314] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:13:16.337 [2024-11-26 06:23:00.277531] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:16.337 [2024-11-26 06:23:00.277546] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:13:16.337 [2024-11-26 06:23:00.277715] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:16.337 06:23:00 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.337 06:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:16.337 06:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:16.337 06:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:16.337 06:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:16.337 06:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:16.337 06:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:16.337 06:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.337 06:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.337 06:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.338 06:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.338 06:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.338 06:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.338 06:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.338 06:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.338 06:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.338 06:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.338 "name": "raid_bdev1", 00:13:16.338 "uuid": "3a969cbc-682b-40e4-99a4-97f08f8ddec5", 00:13:16.338 "strip_size_kb": 0, 00:13:16.338 "state": "online", 00:13:16.338 
"raid_level": "raid1", 00:13:16.338 "superblock": true, 00:13:16.338 "num_base_bdevs": 3, 00:13:16.338 "num_base_bdevs_discovered": 3, 00:13:16.338 "num_base_bdevs_operational": 3, 00:13:16.338 "base_bdevs_list": [ 00:13:16.338 { 00:13:16.338 "name": "BaseBdev1", 00:13:16.338 "uuid": "de75a229-2a15-5f24-bd07-ec493d7d4a03", 00:13:16.338 "is_configured": true, 00:13:16.338 "data_offset": 2048, 00:13:16.338 "data_size": 63488 00:13:16.338 }, 00:13:16.338 { 00:13:16.338 "name": "BaseBdev2", 00:13:16.338 "uuid": "124d4637-7b58-5c1e-a5fd-0e9232505087", 00:13:16.338 "is_configured": true, 00:13:16.338 "data_offset": 2048, 00:13:16.338 "data_size": 63488 00:13:16.338 }, 00:13:16.338 { 00:13:16.338 "name": "BaseBdev3", 00:13:16.338 "uuid": "98aaaacc-061c-5629-acff-f56ed37a9251", 00:13:16.338 "is_configured": true, 00:13:16.338 "data_offset": 2048, 00:13:16.338 "data_size": 63488 00:13:16.338 } 00:13:16.338 ] 00:13:16.338 }' 00:13:16.338 06:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.338 06:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.908 06:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:16.908 06:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:16.908 [2024-11-26 06:23:00.859144] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:13:17.848 06:23:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:17.848 06:23:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.848 06:23:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.848 [2024-11-26 06:23:01.756635] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') 
of raid bdev 'raid_bdev1' 00:13:17.848 [2024-11-26 06:23:01.756711] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:17.848 [2024-11-26 06:23:01.756962] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 00:13:17.848 06:23:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.848 06:23:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:17.848 06:23:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:13:17.848 06:23:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:13:17.848 06:23:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:13:17.848 06:23:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:17.848 06:23:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:17.848 06:23:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:17.848 06:23:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:17.848 06:23:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:17.848 06:23:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:17.848 06:23:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.848 06:23:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.848 06:23:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.848 06:23:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.848 06:23:01 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.848 06:23:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:17.848 06:23:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.848 06:23:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.848 06:23:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.848 06:23:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.848 "name": "raid_bdev1", 00:13:17.848 "uuid": "3a969cbc-682b-40e4-99a4-97f08f8ddec5", 00:13:17.848 "strip_size_kb": 0, 00:13:17.848 "state": "online", 00:13:17.848 "raid_level": "raid1", 00:13:17.848 "superblock": true, 00:13:17.848 "num_base_bdevs": 3, 00:13:17.848 "num_base_bdevs_discovered": 2, 00:13:17.848 "num_base_bdevs_operational": 2, 00:13:17.848 "base_bdevs_list": [ 00:13:17.848 { 00:13:17.848 "name": null, 00:13:17.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.848 "is_configured": false, 00:13:17.848 "data_offset": 0, 00:13:17.848 "data_size": 63488 00:13:17.848 }, 00:13:17.848 { 00:13:17.848 "name": "BaseBdev2", 00:13:17.848 "uuid": "124d4637-7b58-5c1e-a5fd-0e9232505087", 00:13:17.848 "is_configured": true, 00:13:17.848 "data_offset": 2048, 00:13:17.848 "data_size": 63488 00:13:17.848 }, 00:13:17.848 { 00:13:17.848 "name": "BaseBdev3", 00:13:17.848 "uuid": "98aaaacc-061c-5629-acff-f56ed37a9251", 00:13:17.848 "is_configured": true, 00:13:17.848 "data_offset": 2048, 00:13:17.848 "data_size": 63488 00:13:17.848 } 00:13:17.848 ] 00:13:17.848 }' 00:13:17.848 06:23:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.848 06:23:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.107 06:23:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete 
raid_bdev1 00:13:18.107 06:23:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.107 06:23:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.107 [2024-11-26 06:23:02.213758] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:18.107 [2024-11-26 06:23:02.213875] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:18.107 [2024-11-26 06:23:02.217081] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:18.107 [2024-11-26 06:23:02.217198] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:18.107 [2024-11-26 06:23:02.217351] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:18.107 [2024-11-26 06:23:02.217408] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:13:18.107 { 00:13:18.107 "results": [ 00:13:18.107 { 00:13:18.107 "job": "raid_bdev1", 00:13:18.107 "core_mask": "0x1", 00:13:18.107 "workload": "randrw", 00:13:18.107 "percentage": 50, 00:13:18.107 "status": "finished", 00:13:18.107 "queue_depth": 1, 00:13:18.107 "io_size": 131072, 00:13:18.107 "runtime": 1.354897, 00:13:18.107 "iops": 10016.99760203174, 00:13:18.107 "mibps": 1252.1247002539676, 00:13:18.107 "io_failed": 0, 00:13:18.107 "io_timeout": 0, 00:13:18.107 "avg_latency_us": 96.75532646844196, 00:13:18.107 "min_latency_us": 25.2646288209607, 00:13:18.107 "max_latency_us": 1638.4 00:13:18.107 } 00:13:18.107 ], 00:13:18.107 "core_count": 1 00:13:18.107 } 00:13:18.107 06:23:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.107 06:23:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69686 00:13:18.107 06:23:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 69686 ']' 
00:13:18.107 06:23:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 69686 00:13:18.107 06:23:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:13:18.107 06:23:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:18.107 06:23:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69686 00:13:18.366 06:23:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:18.366 06:23:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:18.366 06:23:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69686' 00:13:18.366 killing process with pid 69686 00:13:18.366 06:23:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 69686 00:13:18.366 [2024-11-26 06:23:02.268496] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:18.366 06:23:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 69686 00:13:18.626 [2024-11-26 06:23:02.560888] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:20.005 06:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:20.005 06:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:20.005 06:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.57jxm0ZgFf 00:13:20.005 06:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:13:20.005 06:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:13:20.005 06:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:20.005 06:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:20.005 
************************************ 00:13:20.005 END TEST raid_write_error_test 00:13:20.005 ************************************ 00:13:20.005 06:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:13:20.005 00:13:20.005 real 0m5.008s 00:13:20.005 user 0m5.767s 00:13:20.005 sys 0m0.769s 00:13:20.005 06:23:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:20.005 06:23:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.005 06:23:04 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:13:20.005 06:23:04 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:13:20.005 06:23:04 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:13:20.005 06:23:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:20.005 06:23:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:20.005 06:23:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:20.005 ************************************ 00:13:20.005 START TEST raid_state_function_test 00:13:20.005 ************************************ 00:13:20.005 06:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:13:20.005 06:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:13:20.005 06:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:13:20.005 06:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:20.005 06:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:20.005 06:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:20.005 06:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i 
<= num_base_bdevs )) 00:13:20.005 06:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:20.005 06:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:20.005 06:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:20.005 06:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:20.005 06:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:20.005 06:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:20.005 06:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:20.005 06:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:20.005 06:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:20.005 06:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:13:20.005 06:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:20.005 06:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:20.005 06:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:20.005 06:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:20.005 06:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:20.005 06:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:20.005 06:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:20.005 06:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:20.005 06:23:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:13:20.005 06:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:20.005 06:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:20.005 06:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:20.005 Process raid pid: 69835 00:13:20.005 06:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:20.005 06:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69835 00:13:20.005 06:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:20.005 06:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69835' 00:13:20.005 06:23:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69835 00:13:20.005 06:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 69835 ']' 00:13:20.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:20.005 06:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:20.005 06:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:20.005 06:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:20.005 06:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:20.005 06:23:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.265 [2024-11-26 06:23:04.208448] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:13:20.265 [2024-11-26 06:23:04.208578] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:20.265 [2024-11-26 06:23:04.387098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:20.524 [2024-11-26 06:23:04.516898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:20.783 [2024-11-26 06:23:04.736161] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:20.783 [2024-11-26 06:23:04.736205] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:21.042 06:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:21.042 06:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:13:21.042 06:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:21.042 06:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.042 06:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.042 [2024-11-26 06:23:05.082275] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:21.042 [2024-11-26 06:23:05.082438] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:21.042 [2024-11-26 06:23:05.082455] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:21.042 [2024-11-26 06:23:05.082483] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:21.042 [2024-11-26 06:23:05.082491] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:21.042 [2024-11-26 06:23:05.082501] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:21.042 [2024-11-26 06:23:05.082508] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:21.042 [2024-11-26 06:23:05.082517] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:21.042 06:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.042 06:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:21.042 06:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:21.042 06:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:21.042 06:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:21.042 06:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:21.042 06:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:21.042 06:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:21.042 06:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:21.042 06:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:21.042 06:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:13:21.042 06:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:21.042 06:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.042 06:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.042 06:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.042 06:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.042 06:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:21.042 "name": "Existed_Raid", 00:13:21.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.042 "strip_size_kb": 64, 00:13:21.042 "state": "configuring", 00:13:21.042 "raid_level": "raid0", 00:13:21.042 "superblock": false, 00:13:21.042 "num_base_bdevs": 4, 00:13:21.042 "num_base_bdevs_discovered": 0, 00:13:21.042 "num_base_bdevs_operational": 4, 00:13:21.042 "base_bdevs_list": [ 00:13:21.042 { 00:13:21.042 "name": "BaseBdev1", 00:13:21.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.042 "is_configured": false, 00:13:21.042 "data_offset": 0, 00:13:21.042 "data_size": 0 00:13:21.042 }, 00:13:21.042 { 00:13:21.042 "name": "BaseBdev2", 00:13:21.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.042 "is_configured": false, 00:13:21.042 "data_offset": 0, 00:13:21.042 "data_size": 0 00:13:21.042 }, 00:13:21.042 { 00:13:21.042 "name": "BaseBdev3", 00:13:21.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.042 "is_configured": false, 00:13:21.042 "data_offset": 0, 00:13:21.042 "data_size": 0 00:13:21.042 }, 00:13:21.042 { 00:13:21.042 "name": "BaseBdev4", 00:13:21.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.042 "is_configured": false, 00:13:21.042 "data_offset": 0, 00:13:21.042 "data_size": 0 00:13:21.042 } 00:13:21.042 ] 00:13:21.042 
}' 00:13:21.042 06:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:21.042 06:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.610 06:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:21.610 06:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.610 06:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.610 [2024-11-26 06:23:05.573407] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:21.610 [2024-11-26 06:23:05.573560] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:21.610 06:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.610 06:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:21.610 06:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.610 06:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.610 [2024-11-26 06:23:05.585381] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:21.611 [2024-11-26 06:23:05.585528] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:21.611 [2024-11-26 06:23:05.585545] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:21.611 [2024-11-26 06:23:05.585557] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:21.611 [2024-11-26 06:23:05.585565] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:21.611 
[2024-11-26 06:23:05.585590] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:21.611 [2024-11-26 06:23:05.585598] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:21.611 [2024-11-26 06:23:05.585609] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:21.611 06:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.611 06:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:21.611 06:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.611 06:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.611 [2024-11-26 06:23:05.634312] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:21.611 BaseBdev1 00:13:21.611 06:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.611 06:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:21.611 06:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:21.611 06:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:21.611 06:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:21.611 06:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:21.611 06:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:21.611 06:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:21.611 06:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:21.611 06:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.611 06:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.611 06:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:21.611 06:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.611 06:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.611 [ 00:13:21.611 { 00:13:21.611 "name": "BaseBdev1", 00:13:21.611 "aliases": [ 00:13:21.611 "0f87775b-e5fc-477b-8fb4-b6f72d2a521d" 00:13:21.611 ], 00:13:21.611 "product_name": "Malloc disk", 00:13:21.611 "block_size": 512, 00:13:21.611 "num_blocks": 65536, 00:13:21.611 "uuid": "0f87775b-e5fc-477b-8fb4-b6f72d2a521d", 00:13:21.611 "assigned_rate_limits": { 00:13:21.611 "rw_ios_per_sec": 0, 00:13:21.611 "rw_mbytes_per_sec": 0, 00:13:21.611 "r_mbytes_per_sec": 0, 00:13:21.611 "w_mbytes_per_sec": 0 00:13:21.611 }, 00:13:21.611 "claimed": true, 00:13:21.611 "claim_type": "exclusive_write", 00:13:21.611 "zoned": false, 00:13:21.611 "supported_io_types": { 00:13:21.611 "read": true, 00:13:21.611 "write": true, 00:13:21.611 "unmap": true, 00:13:21.611 "flush": true, 00:13:21.611 "reset": true, 00:13:21.611 "nvme_admin": false, 00:13:21.611 "nvme_io": false, 00:13:21.611 "nvme_io_md": false, 00:13:21.611 "write_zeroes": true, 00:13:21.611 "zcopy": true, 00:13:21.611 "get_zone_info": false, 00:13:21.611 "zone_management": false, 00:13:21.611 "zone_append": false, 00:13:21.611 "compare": false, 00:13:21.611 "compare_and_write": false, 00:13:21.611 "abort": true, 00:13:21.611 "seek_hole": false, 00:13:21.611 "seek_data": false, 00:13:21.611 "copy": true, 00:13:21.611 "nvme_iov_md": false 00:13:21.611 }, 00:13:21.611 "memory_domains": [ 00:13:21.611 { 00:13:21.611 "dma_device_id": "system", 00:13:21.611 
"dma_device_type": 1 00:13:21.611 }, 00:13:21.611 { 00:13:21.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:21.611 "dma_device_type": 2 00:13:21.611 } 00:13:21.611 ], 00:13:21.611 "driver_specific": {} 00:13:21.611 } 00:13:21.611 ] 00:13:21.611 06:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.611 06:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:21.611 06:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:21.611 06:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:21.611 06:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:21.611 06:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:21.611 06:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:21.611 06:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:21.611 06:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:21.611 06:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:21.611 06:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:21.611 06:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:21.611 06:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.611 06:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:21.611 06:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.611 06:23:05 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.611 06:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.870 06:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:21.870 "name": "Existed_Raid", 00:13:21.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.870 "strip_size_kb": 64, 00:13:21.870 "state": "configuring", 00:13:21.870 "raid_level": "raid0", 00:13:21.870 "superblock": false, 00:13:21.870 "num_base_bdevs": 4, 00:13:21.870 "num_base_bdevs_discovered": 1, 00:13:21.870 "num_base_bdevs_operational": 4, 00:13:21.870 "base_bdevs_list": [ 00:13:21.870 { 00:13:21.870 "name": "BaseBdev1", 00:13:21.870 "uuid": "0f87775b-e5fc-477b-8fb4-b6f72d2a521d", 00:13:21.870 "is_configured": true, 00:13:21.870 "data_offset": 0, 00:13:21.870 "data_size": 65536 00:13:21.870 }, 00:13:21.870 { 00:13:21.870 "name": "BaseBdev2", 00:13:21.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.870 "is_configured": false, 00:13:21.870 "data_offset": 0, 00:13:21.870 "data_size": 0 00:13:21.870 }, 00:13:21.870 { 00:13:21.870 "name": "BaseBdev3", 00:13:21.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.870 "is_configured": false, 00:13:21.870 "data_offset": 0, 00:13:21.870 "data_size": 0 00:13:21.870 }, 00:13:21.870 { 00:13:21.870 "name": "BaseBdev4", 00:13:21.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.870 "is_configured": false, 00:13:21.870 "data_offset": 0, 00:13:21.870 "data_size": 0 00:13:21.870 } 00:13:21.870 ] 00:13:21.870 }' 00:13:21.870 06:23:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:21.870 06:23:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.129 06:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:22.129 06:23:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.129 06:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.129 [2024-11-26 06:23:06.137590] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:22.129 [2024-11-26 06:23:06.137681] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:22.129 06:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.129 06:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:22.129 06:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.129 06:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.129 [2024-11-26 06:23:06.149584] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:22.130 [2024-11-26 06:23:06.151772] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:22.130 [2024-11-26 06:23:06.151821] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:22.130 [2024-11-26 06:23:06.151832] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:22.130 [2024-11-26 06:23:06.151844] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:22.130 [2024-11-26 06:23:06.151852] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:22.130 [2024-11-26 06:23:06.151861] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:22.130 06:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.130 06:23:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:22.130 06:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:22.130 06:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:22.130 06:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:22.130 06:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:22.130 06:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:22.130 06:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:22.130 06:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:22.130 06:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.130 06:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.130 06:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.130 06:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.130 06:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:22.130 06:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.130 06:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.130 06:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.130 06:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.130 06:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:13:22.130 "name": "Existed_Raid", 00:13:22.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.130 "strip_size_kb": 64, 00:13:22.130 "state": "configuring", 00:13:22.130 "raid_level": "raid0", 00:13:22.130 "superblock": false, 00:13:22.130 "num_base_bdevs": 4, 00:13:22.130 "num_base_bdevs_discovered": 1, 00:13:22.130 "num_base_bdevs_operational": 4, 00:13:22.130 "base_bdevs_list": [ 00:13:22.130 { 00:13:22.130 "name": "BaseBdev1", 00:13:22.130 "uuid": "0f87775b-e5fc-477b-8fb4-b6f72d2a521d", 00:13:22.130 "is_configured": true, 00:13:22.130 "data_offset": 0, 00:13:22.130 "data_size": 65536 00:13:22.130 }, 00:13:22.130 { 00:13:22.130 "name": "BaseBdev2", 00:13:22.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.130 "is_configured": false, 00:13:22.130 "data_offset": 0, 00:13:22.130 "data_size": 0 00:13:22.130 }, 00:13:22.130 { 00:13:22.130 "name": "BaseBdev3", 00:13:22.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.130 "is_configured": false, 00:13:22.130 "data_offset": 0, 00:13:22.130 "data_size": 0 00:13:22.130 }, 00:13:22.130 { 00:13:22.130 "name": "BaseBdev4", 00:13:22.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.130 "is_configured": false, 00:13:22.130 "data_offset": 0, 00:13:22.130 "data_size": 0 00:13:22.130 } 00:13:22.130 ] 00:13:22.130 }' 00:13:22.130 06:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.130 06:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.696 06:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:22.696 06:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.697 06:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.697 [2024-11-26 06:23:06.686343] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
BaseBdev2 is claimed 00:13:22.697 BaseBdev2 00:13:22.697 06:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.697 06:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:22.697 06:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:22.697 06:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:22.697 06:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:22.697 06:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:22.697 06:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:22.697 06:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:22.697 06:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.697 06:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.697 06:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.697 06:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:22.697 06:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.697 06:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.697 [ 00:13:22.697 { 00:13:22.697 "name": "BaseBdev2", 00:13:22.697 "aliases": [ 00:13:22.697 "97f1a0a5-2485-4542-9d20-b549604135fb" 00:13:22.697 ], 00:13:22.697 "product_name": "Malloc disk", 00:13:22.697 "block_size": 512, 00:13:22.697 "num_blocks": 65536, 00:13:22.697 "uuid": "97f1a0a5-2485-4542-9d20-b549604135fb", 00:13:22.697 "assigned_rate_limits": { 00:13:22.697 
"rw_ios_per_sec": 0, 00:13:22.697 "rw_mbytes_per_sec": 0, 00:13:22.697 "r_mbytes_per_sec": 0, 00:13:22.697 "w_mbytes_per_sec": 0 00:13:22.697 }, 00:13:22.697 "claimed": true, 00:13:22.697 "claim_type": "exclusive_write", 00:13:22.697 "zoned": false, 00:13:22.697 "supported_io_types": { 00:13:22.697 "read": true, 00:13:22.697 "write": true, 00:13:22.697 "unmap": true, 00:13:22.697 "flush": true, 00:13:22.697 "reset": true, 00:13:22.697 "nvme_admin": false, 00:13:22.697 "nvme_io": false, 00:13:22.697 "nvme_io_md": false, 00:13:22.697 "write_zeroes": true, 00:13:22.697 "zcopy": true, 00:13:22.697 "get_zone_info": false, 00:13:22.697 "zone_management": false, 00:13:22.697 "zone_append": false, 00:13:22.697 "compare": false, 00:13:22.697 "compare_and_write": false, 00:13:22.697 "abort": true, 00:13:22.697 "seek_hole": false, 00:13:22.697 "seek_data": false, 00:13:22.697 "copy": true, 00:13:22.697 "nvme_iov_md": false 00:13:22.697 }, 00:13:22.697 "memory_domains": [ 00:13:22.697 { 00:13:22.697 "dma_device_id": "system", 00:13:22.697 "dma_device_type": 1 00:13:22.697 }, 00:13:22.697 { 00:13:22.697 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:22.697 "dma_device_type": 2 00:13:22.697 } 00:13:22.697 ], 00:13:22.697 "driver_specific": {} 00:13:22.697 } 00:13:22.697 ] 00:13:22.697 06:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.697 06:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:22.697 06:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:22.697 06:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:22.697 06:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:22.697 06:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:22.697 06:23:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:22.697 06:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:22.697 06:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:22.697 06:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:22.697 06:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.697 06:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.697 06:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.697 06:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.697 06:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.697 06:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:22.697 06:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.697 06:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.697 06:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.697 06:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.697 "name": "Existed_Raid", 00:13:22.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.697 "strip_size_kb": 64, 00:13:22.697 "state": "configuring", 00:13:22.697 "raid_level": "raid0", 00:13:22.697 "superblock": false, 00:13:22.697 "num_base_bdevs": 4, 00:13:22.697 "num_base_bdevs_discovered": 2, 00:13:22.697 "num_base_bdevs_operational": 4, 00:13:22.697 "base_bdevs_list": [ 00:13:22.697 { 00:13:22.697 "name": "BaseBdev1", 
00:13:22.697 "uuid": "0f87775b-e5fc-477b-8fb4-b6f72d2a521d", 00:13:22.697 "is_configured": true, 00:13:22.697 "data_offset": 0, 00:13:22.697 "data_size": 65536 00:13:22.697 }, 00:13:22.697 { 00:13:22.697 "name": "BaseBdev2", 00:13:22.697 "uuid": "97f1a0a5-2485-4542-9d20-b549604135fb", 00:13:22.697 "is_configured": true, 00:13:22.697 "data_offset": 0, 00:13:22.697 "data_size": 65536 00:13:22.697 }, 00:13:22.697 { 00:13:22.697 "name": "BaseBdev3", 00:13:22.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.697 "is_configured": false, 00:13:22.697 "data_offset": 0, 00:13:22.697 "data_size": 0 00:13:22.697 }, 00:13:22.697 { 00:13:22.697 "name": "BaseBdev4", 00:13:22.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.697 "is_configured": false, 00:13:22.697 "data_offset": 0, 00:13:22.697 "data_size": 0 00:13:22.697 } 00:13:22.697 ] 00:13:22.697 }' 00:13:22.698 06:23:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.698 06:23:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.265 06:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:23.265 06:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.265 06:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.265 [2024-11-26 06:23:07.251983] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:23.265 BaseBdev3 00:13:23.265 06:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.265 06:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:23.265 06:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:23.265 06:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 
-- # local bdev_timeout= 00:13:23.265 06:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:23.265 06:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:23.265 06:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:23.265 06:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:23.265 06:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.265 06:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.265 06:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.265 06:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:23.265 06:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.266 06:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.266 [ 00:13:23.266 { 00:13:23.266 "name": "BaseBdev3", 00:13:23.266 "aliases": [ 00:13:23.266 "7ff98195-18d8-408f-afca-2c8d03ab7dc2" 00:13:23.266 ], 00:13:23.266 "product_name": "Malloc disk", 00:13:23.266 "block_size": 512, 00:13:23.266 "num_blocks": 65536, 00:13:23.266 "uuid": "7ff98195-18d8-408f-afca-2c8d03ab7dc2", 00:13:23.266 "assigned_rate_limits": { 00:13:23.266 "rw_ios_per_sec": 0, 00:13:23.266 "rw_mbytes_per_sec": 0, 00:13:23.266 "r_mbytes_per_sec": 0, 00:13:23.266 "w_mbytes_per_sec": 0 00:13:23.266 }, 00:13:23.266 "claimed": true, 00:13:23.266 "claim_type": "exclusive_write", 00:13:23.266 "zoned": false, 00:13:23.266 "supported_io_types": { 00:13:23.266 "read": true, 00:13:23.266 "write": true, 00:13:23.266 "unmap": true, 00:13:23.266 "flush": true, 00:13:23.266 "reset": true, 00:13:23.266 "nvme_admin": false, 00:13:23.266 
"nvme_io": false, 00:13:23.266 "nvme_io_md": false, 00:13:23.266 "write_zeroes": true, 00:13:23.266 "zcopy": true, 00:13:23.266 "get_zone_info": false, 00:13:23.266 "zone_management": false, 00:13:23.266 "zone_append": false, 00:13:23.266 "compare": false, 00:13:23.266 "compare_and_write": false, 00:13:23.266 "abort": true, 00:13:23.266 "seek_hole": false, 00:13:23.266 "seek_data": false, 00:13:23.266 "copy": true, 00:13:23.266 "nvme_iov_md": false 00:13:23.266 }, 00:13:23.266 "memory_domains": [ 00:13:23.266 { 00:13:23.266 "dma_device_id": "system", 00:13:23.266 "dma_device_type": 1 00:13:23.266 }, 00:13:23.266 { 00:13:23.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:23.266 "dma_device_type": 2 00:13:23.266 } 00:13:23.266 ], 00:13:23.266 "driver_specific": {} 00:13:23.266 } 00:13:23.266 ] 00:13:23.266 06:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.266 06:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:23.266 06:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:23.266 06:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:23.266 06:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:23.266 06:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:23.266 06:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:23.266 06:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:23.266 06:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:23.266 06:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:23.266 06:23:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.266 06:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.266 06:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.266 06:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.266 06:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.266 06:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.266 06:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.266 06:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:23.266 06:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.266 06:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.266 "name": "Existed_Raid", 00:13:23.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.266 "strip_size_kb": 64, 00:13:23.266 "state": "configuring", 00:13:23.266 "raid_level": "raid0", 00:13:23.266 "superblock": false, 00:13:23.266 "num_base_bdevs": 4, 00:13:23.266 "num_base_bdevs_discovered": 3, 00:13:23.266 "num_base_bdevs_operational": 4, 00:13:23.266 "base_bdevs_list": [ 00:13:23.266 { 00:13:23.266 "name": "BaseBdev1", 00:13:23.266 "uuid": "0f87775b-e5fc-477b-8fb4-b6f72d2a521d", 00:13:23.266 "is_configured": true, 00:13:23.266 "data_offset": 0, 00:13:23.266 "data_size": 65536 00:13:23.266 }, 00:13:23.266 { 00:13:23.266 "name": "BaseBdev2", 00:13:23.266 "uuid": "97f1a0a5-2485-4542-9d20-b549604135fb", 00:13:23.266 "is_configured": true, 00:13:23.266 "data_offset": 0, 00:13:23.266 "data_size": 65536 00:13:23.266 }, 00:13:23.266 { 00:13:23.266 "name": "BaseBdev3", 00:13:23.266 
"uuid": "7ff98195-18d8-408f-afca-2c8d03ab7dc2", 00:13:23.266 "is_configured": true, 00:13:23.266 "data_offset": 0, 00:13:23.266 "data_size": 65536 00:13:23.266 }, 00:13:23.266 { 00:13:23.266 "name": "BaseBdev4", 00:13:23.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.266 "is_configured": false, 00:13:23.266 "data_offset": 0, 00:13:23.266 "data_size": 0 00:13:23.266 } 00:13:23.266 ] 00:13:23.266 }' 00:13:23.266 06:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.266 06:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.835 06:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:23.835 06:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.835 06:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.835 [2024-11-26 06:23:07.805613] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:23.835 [2024-11-26 06:23:07.805681] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:23.835 [2024-11-26 06:23:07.805694] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:13:23.835 [2024-11-26 06:23:07.806098] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:23.835 [2024-11-26 06:23:07.806328] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:23.835 [2024-11-26 06:23:07.806357] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:23.835 BaseBdev4 00:13:23.835 [2024-11-26 06:23:07.806706] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:23.835 06:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:23.835 06:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:13:23.835 06:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:23.835 06:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:23.835 06:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:23.835 06:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:23.835 06:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:23.835 06:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:23.835 06:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.835 06:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.835 06:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.835 06:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:23.835 06:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.835 06:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.835 [ 00:13:23.835 { 00:13:23.835 "name": "BaseBdev4", 00:13:23.835 "aliases": [ 00:13:23.835 "adc02392-148b-447c-a65c-f9ecab288440" 00:13:23.835 ], 00:13:23.835 "product_name": "Malloc disk", 00:13:23.835 "block_size": 512, 00:13:23.835 "num_blocks": 65536, 00:13:23.835 "uuid": "adc02392-148b-447c-a65c-f9ecab288440", 00:13:23.835 "assigned_rate_limits": { 00:13:23.835 "rw_ios_per_sec": 0, 00:13:23.835 "rw_mbytes_per_sec": 0, 00:13:23.835 "r_mbytes_per_sec": 0, 00:13:23.835 "w_mbytes_per_sec": 0 00:13:23.835 }, 
00:13:23.835 "claimed": true, 00:13:23.835 "claim_type": "exclusive_write", 00:13:23.835 "zoned": false, 00:13:23.835 "supported_io_types": { 00:13:23.835 "read": true, 00:13:23.835 "write": true, 00:13:23.835 "unmap": true, 00:13:23.835 "flush": true, 00:13:23.835 "reset": true, 00:13:23.835 "nvme_admin": false, 00:13:23.835 "nvme_io": false, 00:13:23.835 "nvme_io_md": false, 00:13:23.835 "write_zeroes": true, 00:13:23.835 "zcopy": true, 00:13:23.835 "get_zone_info": false, 00:13:23.835 "zone_management": false, 00:13:23.835 "zone_append": false, 00:13:23.835 "compare": false, 00:13:23.835 "compare_and_write": false, 00:13:23.835 "abort": true, 00:13:23.835 "seek_hole": false, 00:13:23.835 "seek_data": false, 00:13:23.835 "copy": true, 00:13:23.835 "nvme_iov_md": false 00:13:23.835 }, 00:13:23.835 "memory_domains": [ 00:13:23.835 { 00:13:23.835 "dma_device_id": "system", 00:13:23.835 "dma_device_type": 1 00:13:23.835 }, 00:13:23.835 { 00:13:23.835 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:23.835 "dma_device_type": 2 00:13:23.835 } 00:13:23.835 ], 00:13:23.835 "driver_specific": {} 00:13:23.835 } 00:13:23.835 ] 00:13:23.835 06:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.835 06:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:23.835 06:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:23.835 06:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:23.835 06:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:13:23.835 06:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:23.835 06:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:23.835 06:23:07 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:23.835 06:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:23.835 06:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:23.835 06:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.835 06:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.835 06:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.835 06:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.835 06:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.835 06:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.835 06:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.835 06:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:23.835 06:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.835 06:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.835 "name": "Existed_Raid", 00:13:23.835 "uuid": "6ae3ac72-924c-4c4a-8528-70ab3a5481d9", 00:13:23.835 "strip_size_kb": 64, 00:13:23.835 "state": "online", 00:13:23.835 "raid_level": "raid0", 00:13:23.835 "superblock": false, 00:13:23.835 "num_base_bdevs": 4, 00:13:23.835 "num_base_bdevs_discovered": 4, 00:13:23.835 "num_base_bdevs_operational": 4, 00:13:23.835 "base_bdevs_list": [ 00:13:23.835 { 00:13:23.864 "name": "BaseBdev1", 00:13:23.864 "uuid": "0f87775b-e5fc-477b-8fb4-b6f72d2a521d", 00:13:23.864 "is_configured": true, 00:13:23.864 "data_offset": 0, 00:13:23.864 "data_size": 65536 
00:13:23.864 }, 00:13:23.864 { 00:13:23.864 "name": "BaseBdev2", 00:13:23.864 "uuid": "97f1a0a5-2485-4542-9d20-b549604135fb", 00:13:23.864 "is_configured": true, 00:13:23.864 "data_offset": 0, 00:13:23.864 "data_size": 65536 00:13:23.864 }, 00:13:23.864 { 00:13:23.864 "name": "BaseBdev3", 00:13:23.864 "uuid": "7ff98195-18d8-408f-afca-2c8d03ab7dc2", 00:13:23.864 "is_configured": true, 00:13:23.864 "data_offset": 0, 00:13:23.864 "data_size": 65536 00:13:23.864 }, 00:13:23.864 { 00:13:23.864 "name": "BaseBdev4", 00:13:23.864 "uuid": "adc02392-148b-447c-a65c-f9ecab288440", 00:13:23.864 "is_configured": true, 00:13:23.864 "data_offset": 0, 00:13:23.864 "data_size": 65536 00:13:23.864 } 00:13:23.864 ] 00:13:23.864 }' 00:13:23.864 06:23:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.864 06:23:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.432 06:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:24.432 06:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:24.432 06:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:24.432 06:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:24.432 06:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:24.432 06:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:24.432 06:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:24.432 06:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.432 06:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.432 06:23:08 
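The `verify_raid_bdev_state Existed_Raid online raid0 64 4` call traced above fetches `bdev_raid_get_bdevs all`, selects the entry by name with `jq`, and compares the expected fields against it. A minimal Python sketch of that check, using a record trimmed down from the JSON printed in this log (the real helper lives in `bdev/bdev_raid.sh`; the function here is an illustrative stand-in, not the script's code):

```python
import json

# Trimmed-down copy of the "Existed_Raid" record printed in the log above.
raid_bdevs = json.loads("""
[{"name": "Existed_Raid", "strip_size_kb": 64, "state": "online",
  "raid_level": "raid0", "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 4, "num_base_bdevs_operational": 4}]
""")

def verify_raid_bdev_state(bdevs, name, state, level, strip_kb, operational):
    # Mirrors the shell helper: jq select(.name == ...) plus field comparisons.
    info = next(b for b in bdevs if b["name"] == name)
    assert info["state"] == state
    assert info["raid_level"] == level
    assert info["strip_size_kb"] == strip_kb
    assert info["num_base_bdevs_operational"] == operational
    return info

info = verify_raid_bdev_state(raid_bdevs, "Existed_Raid", "online", "raid0", 64, 4)
print(info["num_base_bdevs_discovered"])  # 4
```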
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:24.432 [2024-11-26 06:23:08.297368] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:24.432 06:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.432 06:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:24.432 "name": "Existed_Raid", 00:13:24.432 "aliases": [ 00:13:24.432 "6ae3ac72-924c-4c4a-8528-70ab3a5481d9" 00:13:24.432 ], 00:13:24.432 "product_name": "Raid Volume", 00:13:24.432 "block_size": 512, 00:13:24.432 "num_blocks": 262144, 00:13:24.432 "uuid": "6ae3ac72-924c-4c4a-8528-70ab3a5481d9", 00:13:24.432 "assigned_rate_limits": { 00:13:24.432 "rw_ios_per_sec": 0, 00:13:24.432 "rw_mbytes_per_sec": 0, 00:13:24.432 "r_mbytes_per_sec": 0, 00:13:24.432 "w_mbytes_per_sec": 0 00:13:24.432 }, 00:13:24.432 "claimed": false, 00:13:24.432 "zoned": false, 00:13:24.432 "supported_io_types": { 00:13:24.432 "read": true, 00:13:24.432 "write": true, 00:13:24.432 "unmap": true, 00:13:24.432 "flush": true, 00:13:24.432 "reset": true, 00:13:24.432 "nvme_admin": false, 00:13:24.432 "nvme_io": false, 00:13:24.432 "nvme_io_md": false, 00:13:24.432 "write_zeroes": true, 00:13:24.432 "zcopy": false, 00:13:24.432 "get_zone_info": false, 00:13:24.432 "zone_management": false, 00:13:24.432 "zone_append": false, 00:13:24.432 "compare": false, 00:13:24.432 "compare_and_write": false, 00:13:24.432 "abort": false, 00:13:24.433 "seek_hole": false, 00:13:24.433 "seek_data": false, 00:13:24.433 "copy": false, 00:13:24.433 "nvme_iov_md": false 00:13:24.433 }, 00:13:24.433 "memory_domains": [ 00:13:24.433 { 00:13:24.433 "dma_device_id": "system", 00:13:24.433 "dma_device_type": 1 00:13:24.433 }, 00:13:24.433 { 00:13:24.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.433 "dma_device_type": 2 00:13:24.433 }, 00:13:24.433 { 00:13:24.433 "dma_device_id": "system", 00:13:24.433 
"dma_device_type": 1 00:13:24.433 }, 00:13:24.433 { 00:13:24.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.433 "dma_device_type": 2 00:13:24.433 }, 00:13:24.433 { 00:13:24.433 "dma_device_id": "system", 00:13:24.433 "dma_device_type": 1 00:13:24.433 }, 00:13:24.433 { 00:13:24.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.433 "dma_device_type": 2 00:13:24.433 }, 00:13:24.433 { 00:13:24.433 "dma_device_id": "system", 00:13:24.433 "dma_device_type": 1 00:13:24.433 }, 00:13:24.433 { 00:13:24.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.433 "dma_device_type": 2 00:13:24.433 } 00:13:24.433 ], 00:13:24.433 "driver_specific": { 00:13:24.433 "raid": { 00:13:24.433 "uuid": "6ae3ac72-924c-4c4a-8528-70ab3a5481d9", 00:13:24.433 "strip_size_kb": 64, 00:13:24.433 "state": "online", 00:13:24.433 "raid_level": "raid0", 00:13:24.433 "superblock": false, 00:13:24.433 "num_base_bdevs": 4, 00:13:24.433 "num_base_bdevs_discovered": 4, 00:13:24.433 "num_base_bdevs_operational": 4, 00:13:24.433 "base_bdevs_list": [ 00:13:24.433 { 00:13:24.433 "name": "BaseBdev1", 00:13:24.433 "uuid": "0f87775b-e5fc-477b-8fb4-b6f72d2a521d", 00:13:24.433 "is_configured": true, 00:13:24.433 "data_offset": 0, 00:13:24.433 "data_size": 65536 00:13:24.433 }, 00:13:24.433 { 00:13:24.433 "name": "BaseBdev2", 00:13:24.433 "uuid": "97f1a0a5-2485-4542-9d20-b549604135fb", 00:13:24.433 "is_configured": true, 00:13:24.433 "data_offset": 0, 00:13:24.433 "data_size": 65536 00:13:24.433 }, 00:13:24.433 { 00:13:24.433 "name": "BaseBdev3", 00:13:24.433 "uuid": "7ff98195-18d8-408f-afca-2c8d03ab7dc2", 00:13:24.433 "is_configured": true, 00:13:24.433 "data_offset": 0, 00:13:24.433 "data_size": 65536 00:13:24.433 }, 00:13:24.433 { 00:13:24.433 "name": "BaseBdev4", 00:13:24.433 "uuid": "adc02392-148b-447c-a65c-f9ecab288440", 00:13:24.433 "is_configured": true, 00:13:24.433 "data_offset": 0, 00:13:24.433 "data_size": 65536 00:13:24.433 } 00:13:24.433 ] 00:13:24.433 } 00:13:24.433 } 00:13:24.433 }' 
00:13:24.433 06:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:24.433 06:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:24.433 BaseBdev2 00:13:24.433 BaseBdev3 00:13:24.433 BaseBdev4' 00:13:24.433 06:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:24.433 06:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:24.433 06:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:24.433 06:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:24.433 06:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:24.433 06:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.433 06:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.433 06:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.433 06:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:24.433 06:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:24.433 06:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:24.433 06:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:24.433 06:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:24.433 06:23:08 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.433 06:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.433 06:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.433 06:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:24.433 06:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:24.433 06:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:24.433 06:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:24.433 06:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:24.433 06:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.433 06:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.433 06:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.692 06:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:24.692 06:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:24.692 06:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:24.692 06:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:24.692 06:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.692 06:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:24.692 06:23:08 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.692 06:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.692 06:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:24.692 06:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:24.692 06:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:24.692 06:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.692 06:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.692 [2024-11-26 06:23:08.612496] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:24.692 [2024-11-26 06:23:08.612554] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:24.692 [2024-11-26 06:23:08.612615] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:24.692 06:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.692 06:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:24.692 06:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:13:24.692 06:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:24.692 06:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:24.692 06:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:13:24.692 06:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:13:24.692 06:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:24.692 
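The per-bdev property loop above builds `[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")` with jq. For a plain 512-byte malloc bdev the last three keys are absent, so they read as null, and jq's `join` renders null as the empty string; the result is `512` followed by three spaces, which is exactly the `\5\1\2\ \ \ ` glob the `[[ ... ]]` comparisons match. A small Python sketch of that join semantics (the function name is my own):

```python
def jq_style_join(values, sep=" "):
    # jq's join() treats null as "" and stringifies numbers/booleans.
    return sep.join("" if v is None else str(v) for v in values)

base_bdev = {"block_size": 512}  # no md_size / md_interleave / dif_type keys
fields = [base_bdev.get(k) for k in
          ("block_size", "md_size", "md_interleave", "dif_type")]
cmp_base_bdev = jq_style_join(fields)
print(repr(cmp_base_bdev))  # '512   '
```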
06:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:13:24.692 06:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:24.692 06:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:24.692 06:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:24.692 06:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:24.692 06:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:24.692 06:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:24.692 06:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:24.692 06:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.692 06:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.692 06:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.692 06:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:24.692 06:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.692 06:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:24.692 "name": "Existed_Raid", 00:13:24.692 "uuid": "6ae3ac72-924c-4c4a-8528-70ab3a5481d9", 00:13:24.692 "strip_size_kb": 64, 00:13:24.692 "state": "offline", 00:13:24.692 "raid_level": "raid0", 00:13:24.692 "superblock": false, 00:13:24.692 "num_base_bdevs": 4, 00:13:24.692 "num_base_bdevs_discovered": 3, 00:13:24.692 "num_base_bdevs_operational": 3, 00:13:24.692 "base_bdevs_list": [ 00:13:24.692 { 00:13:24.692 "name": null, 00:13:24.692 
"uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.692 "is_configured": false, 00:13:24.692 "data_offset": 0, 00:13:24.692 "data_size": 65536 00:13:24.692 }, 00:13:24.692 { 00:13:24.692 "name": "BaseBdev2", 00:13:24.692 "uuid": "97f1a0a5-2485-4542-9d20-b549604135fb", 00:13:24.692 "is_configured": true, 00:13:24.692 "data_offset": 0, 00:13:24.692 "data_size": 65536 00:13:24.692 }, 00:13:24.692 { 00:13:24.692 "name": "BaseBdev3", 00:13:24.692 "uuid": "7ff98195-18d8-408f-afca-2c8d03ab7dc2", 00:13:24.692 "is_configured": true, 00:13:24.692 "data_offset": 0, 00:13:24.692 "data_size": 65536 00:13:24.692 }, 00:13:24.692 { 00:13:24.692 "name": "BaseBdev4", 00:13:24.692 "uuid": "adc02392-148b-447c-a65c-f9ecab288440", 00:13:24.692 "is_configured": true, 00:13:24.692 "data_offset": 0, 00:13:24.692 "data_size": 65536 00:13:24.692 } 00:13:24.692 ] 00:13:24.692 }' 00:13:24.692 06:23:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:24.692 06:23:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.270 06:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:25.270 06:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:25.270 06:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.270 06:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.270 06:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.270 06:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:25.270 06:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.270 06:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:25.270 06:23:09 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:25.270 06:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:25.270 06:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.270 06:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.270 [2024-11-26 06:23:09.233656] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:25.270 06:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.270 06:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:25.270 06:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:25.270 06:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.270 06:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:25.270 06:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.270 06:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.270 06:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.530 06:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:25.530 06:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:25.530 06:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:25.530 06:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.530 06:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.530 [2024-11-26 06:23:09.410611] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:25.530 06:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.530 06:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:25.530 06:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:25.530 06:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.530 06:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.530 06:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:25.530 06:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.530 06:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.530 06:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:25.530 06:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:25.530 06:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:25.530 06:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.530 06:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.530 [2024-11-26 06:23:09.586450] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:25.530 [2024-11-26 06:23:09.586522] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:25.789 06:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.789 06:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:25.789 06:23:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:25.789 06:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:25.789 06:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.789 06:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.789 06:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.789 06:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.789 06:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:25.789 06:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:25.789 06:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:25.789 06:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:25.789 06:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:25.789 06:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:25.790 06:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.790 06:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.790 BaseBdev2 00:13:25.790 06:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.790 06:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:25.790 06:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:25.790 06:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:25.790 
06:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:25.790 06:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:25.790 06:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:25.790 06:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:25.790 06:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.790 06:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.790 06:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.790 06:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:25.790 06:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.790 06:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.790 [ 00:13:25.790 { 00:13:25.790 "name": "BaseBdev2", 00:13:25.790 "aliases": [ 00:13:25.790 "86060dbb-b981-46e9-bae3-9ec6c41cc228" 00:13:25.790 ], 00:13:25.790 "product_name": "Malloc disk", 00:13:25.790 "block_size": 512, 00:13:25.790 "num_blocks": 65536, 00:13:25.790 "uuid": "86060dbb-b981-46e9-bae3-9ec6c41cc228", 00:13:25.790 "assigned_rate_limits": { 00:13:25.790 "rw_ios_per_sec": 0, 00:13:25.790 "rw_mbytes_per_sec": 0, 00:13:25.790 "r_mbytes_per_sec": 0, 00:13:25.790 "w_mbytes_per_sec": 0 00:13:25.790 }, 00:13:25.790 "claimed": false, 00:13:25.790 "zoned": false, 00:13:25.790 "supported_io_types": { 00:13:25.790 "read": true, 00:13:25.790 "write": true, 00:13:25.790 "unmap": true, 00:13:25.790 "flush": true, 00:13:25.790 "reset": true, 00:13:25.790 "nvme_admin": false, 00:13:25.790 "nvme_io": false, 00:13:25.790 "nvme_io_md": false, 00:13:25.790 "write_zeroes": true, 
00:13:25.790 "zcopy": true, 00:13:25.790 "get_zone_info": false, 00:13:25.790 "zone_management": false, 00:13:25.790 "zone_append": false, 00:13:25.790 "compare": false, 00:13:25.790 "compare_and_write": false, 00:13:25.790 "abort": true, 00:13:25.790 "seek_hole": false, 00:13:25.790 "seek_data": false, 00:13:25.790 "copy": true, 00:13:25.790 "nvme_iov_md": false 00:13:25.790 }, 00:13:25.790 "memory_domains": [ 00:13:25.790 { 00:13:25.790 "dma_device_id": "system", 00:13:25.790 "dma_device_type": 1 00:13:25.790 }, 00:13:25.790 { 00:13:25.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:25.790 "dma_device_type": 2 00:13:25.790 } 00:13:25.790 ], 00:13:25.790 "driver_specific": {} 00:13:25.790 } 00:13:25.790 ] 00:13:25.790 06:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.790 06:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:25.790 06:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:25.790 06:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:25.790 06:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:25.790 06:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.790 06:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.790 BaseBdev3 00:13:25.790 06:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.790 06:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:25.790 06:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:25.790 06:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:25.790 06:23:09 
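The `waitforbdev` traces interleaved above amount to polling `bdev_get_bdevs -b <name>` until the bdev appears or a timeout (2000 ms by default) expires. A simplified Python sketch of that polling pattern, with a stub list standing in for the RPC call (names and data here are hypothetical, not from the log):

```python
import time

def wait_for_bdev(get_bdevs, name, timeout=2.0, interval=0.1):
    # Simplified take on waitforbdev: poll until the named bdev shows up,
    # giving up once the deadline passes.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if any(b["name"] == name for b in get_bdevs()):
            return True
        time.sleep(interval)
    return False

# Stub standing in for an `rpc.py bdev_get_bdevs` call (hypothetical data):
bdevs = [{"name": "BaseBdev2"}]
print(wait_for_bdev(lambda: bdevs, "BaseBdev2"))  # True
```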
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:25.790 06:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:25.790 06:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:25.790 06:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:25.790 06:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.790 06:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.790 06:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.790 06:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:25.790 06:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.790 06:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.790 [ 00:13:25.790 { 00:13:25.790 "name": "BaseBdev3", 00:13:25.790 "aliases": [ 00:13:25.790 "7d5ec3e5-3782-43a4-9061-3eb501f3ce3a" 00:13:25.790 ], 00:13:25.790 "product_name": "Malloc disk", 00:13:25.790 "block_size": 512, 00:13:25.790 "num_blocks": 65536, 00:13:25.790 "uuid": "7d5ec3e5-3782-43a4-9061-3eb501f3ce3a", 00:13:25.790 "assigned_rate_limits": { 00:13:25.790 "rw_ios_per_sec": 0, 00:13:25.790 "rw_mbytes_per_sec": 0, 00:13:25.790 "r_mbytes_per_sec": 0, 00:13:25.790 "w_mbytes_per_sec": 0 00:13:25.790 }, 00:13:25.790 "claimed": false, 00:13:25.790 "zoned": false, 00:13:25.790 "supported_io_types": { 00:13:25.790 "read": true, 00:13:25.790 "write": true, 00:13:25.790 "unmap": true, 00:13:25.790 "flush": true, 00:13:25.790 "reset": true, 00:13:25.790 "nvme_admin": false, 00:13:25.790 "nvme_io": false, 00:13:25.790 "nvme_io_md": false, 00:13:25.790 "write_zeroes": true, 
00:13:25.790 "zcopy": true, 00:13:25.790 "get_zone_info": false, 00:13:25.790 "zone_management": false, 00:13:25.790 "zone_append": false, 00:13:25.790 "compare": false, 00:13:25.790 "compare_and_write": false, 00:13:25.790 "abort": true, 00:13:25.790 "seek_hole": false, 00:13:25.790 "seek_data": false, 00:13:25.790 "copy": true, 00:13:25.790 "nvme_iov_md": false 00:13:25.790 }, 00:13:25.790 "memory_domains": [ 00:13:25.790 { 00:13:25.790 "dma_device_id": "system", 00:13:25.790 "dma_device_type": 1 00:13:25.790 }, 00:13:25.790 { 00:13:25.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:25.790 "dma_device_type": 2 00:13:25.790 } 00:13:25.790 ], 00:13:25.790 "driver_specific": {} 00:13:25.790 } 00:13:25.790 ] 00:13:25.790 06:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.790 06:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:25.791 06:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:25.791 06:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:25.791 06:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:25.791 06:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.791 06:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.049 BaseBdev4 00:13:26.049 06:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.049 06:23:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:26.049 06:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:26.049 06:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:26.049 06:23:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:26.049 06:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:26.049 06:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:26.049 06:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:26.049 06:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.049 06:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.049 06:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.049 06:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:26.049 06:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.049 06:23:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.049 [ 00:13:26.049 { 00:13:26.049 "name": "BaseBdev4", 00:13:26.049 "aliases": [ 00:13:26.049 "d350bd2c-dc48-479c-a0eb-82fa188d66f2" 00:13:26.049 ], 00:13:26.049 "product_name": "Malloc disk", 00:13:26.049 "block_size": 512, 00:13:26.049 "num_blocks": 65536, 00:13:26.049 "uuid": "d350bd2c-dc48-479c-a0eb-82fa188d66f2", 00:13:26.049 "assigned_rate_limits": { 00:13:26.049 "rw_ios_per_sec": 0, 00:13:26.049 "rw_mbytes_per_sec": 0, 00:13:26.049 "r_mbytes_per_sec": 0, 00:13:26.049 "w_mbytes_per_sec": 0 00:13:26.049 }, 00:13:26.049 "claimed": false, 00:13:26.049 "zoned": false, 00:13:26.049 "supported_io_types": { 00:13:26.049 "read": true, 00:13:26.049 "write": true, 00:13:26.049 "unmap": true, 00:13:26.049 "flush": true, 00:13:26.049 "reset": true, 00:13:26.049 "nvme_admin": false, 00:13:26.049 "nvme_io": false, 00:13:26.049 "nvme_io_md": false, 00:13:26.049 "write_zeroes": true, 
00:13:26.049 "zcopy": true, 00:13:26.049 "get_zone_info": false, 00:13:26.049 "zone_management": false, 00:13:26.049 "zone_append": false, 00:13:26.049 "compare": false, 00:13:26.049 "compare_and_write": false, 00:13:26.049 "abort": true, 00:13:26.049 "seek_hole": false, 00:13:26.049 "seek_data": false, 00:13:26.049 "copy": true, 00:13:26.049 "nvme_iov_md": false 00:13:26.049 }, 00:13:26.049 "memory_domains": [ 00:13:26.049 { 00:13:26.049 "dma_device_id": "system", 00:13:26.049 "dma_device_type": 1 00:13:26.049 }, 00:13:26.049 { 00:13:26.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:26.049 "dma_device_type": 2 00:13:26.049 } 00:13:26.049 ], 00:13:26.049 "driver_specific": {} 00:13:26.049 } 00:13:26.049 ] 00:13:26.049 06:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.049 06:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:26.049 06:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:26.049 06:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:26.049 06:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:26.049 06:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.049 06:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.049 [2024-11-26 06:23:10.007384] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:26.049 [2024-11-26 06:23:10.007440] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:26.049 [2024-11-26 06:23:10.007473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:26.049 [2024-11-26 06:23:10.009668] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:26.049 [2024-11-26 06:23:10.009736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:26.049 06:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.049 06:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:26.049 06:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:26.049 06:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:26.049 06:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:26.049 06:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:26.049 06:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:26.049 06:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.049 06:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.049 06:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.049 06:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.049 06:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.049 06:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.049 06:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.049 06:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:26.049 06:23:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.049 06:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.049 "name": "Existed_Raid", 00:13:26.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.049 "strip_size_kb": 64, 00:13:26.049 "state": "configuring", 00:13:26.049 "raid_level": "raid0", 00:13:26.049 "superblock": false, 00:13:26.049 "num_base_bdevs": 4, 00:13:26.049 "num_base_bdevs_discovered": 3, 00:13:26.049 "num_base_bdevs_operational": 4, 00:13:26.049 "base_bdevs_list": [ 00:13:26.049 { 00:13:26.049 "name": "BaseBdev1", 00:13:26.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.049 "is_configured": false, 00:13:26.049 "data_offset": 0, 00:13:26.049 "data_size": 0 00:13:26.049 }, 00:13:26.049 { 00:13:26.049 "name": "BaseBdev2", 00:13:26.049 "uuid": "86060dbb-b981-46e9-bae3-9ec6c41cc228", 00:13:26.049 "is_configured": true, 00:13:26.049 "data_offset": 0, 00:13:26.049 "data_size": 65536 00:13:26.049 }, 00:13:26.049 { 00:13:26.049 "name": "BaseBdev3", 00:13:26.049 "uuid": "7d5ec3e5-3782-43a4-9061-3eb501f3ce3a", 00:13:26.049 "is_configured": true, 00:13:26.049 "data_offset": 0, 00:13:26.049 "data_size": 65536 00:13:26.049 }, 00:13:26.049 { 00:13:26.049 "name": "BaseBdev4", 00:13:26.049 "uuid": "d350bd2c-dc48-479c-a0eb-82fa188d66f2", 00:13:26.049 "is_configured": true, 00:13:26.049 "data_offset": 0, 00:13:26.049 "data_size": 65536 00:13:26.049 } 00:13:26.049 ] 00:13:26.049 }' 00:13:26.049 06:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.049 06:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.632 06:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:26.632 06:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.632 06:23:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:26.632 [2024-11-26 06:23:10.506587] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:26.632 06:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.632 06:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:26.632 06:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:26.632 06:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:26.632 06:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:26.632 06:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:26.632 06:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:26.632 06:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.632 06:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.632 06:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.632 06:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.632 06:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:26.632 06:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.632 06:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.632 06:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.632 06:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.632 
06:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.632 "name": "Existed_Raid", 00:13:26.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.632 "strip_size_kb": 64, 00:13:26.632 "state": "configuring", 00:13:26.632 "raid_level": "raid0", 00:13:26.632 "superblock": false, 00:13:26.632 "num_base_bdevs": 4, 00:13:26.632 "num_base_bdevs_discovered": 2, 00:13:26.632 "num_base_bdevs_operational": 4, 00:13:26.632 "base_bdevs_list": [ 00:13:26.632 { 00:13:26.632 "name": "BaseBdev1", 00:13:26.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.632 "is_configured": false, 00:13:26.632 "data_offset": 0, 00:13:26.632 "data_size": 0 00:13:26.632 }, 00:13:26.632 { 00:13:26.632 "name": null, 00:13:26.632 "uuid": "86060dbb-b981-46e9-bae3-9ec6c41cc228", 00:13:26.632 "is_configured": false, 00:13:26.632 "data_offset": 0, 00:13:26.632 "data_size": 65536 00:13:26.632 }, 00:13:26.632 { 00:13:26.632 "name": "BaseBdev3", 00:13:26.632 "uuid": "7d5ec3e5-3782-43a4-9061-3eb501f3ce3a", 00:13:26.632 "is_configured": true, 00:13:26.632 "data_offset": 0, 00:13:26.632 "data_size": 65536 00:13:26.632 }, 00:13:26.632 { 00:13:26.632 "name": "BaseBdev4", 00:13:26.632 "uuid": "d350bd2c-dc48-479c-a0eb-82fa188d66f2", 00:13:26.632 "is_configured": true, 00:13:26.632 "data_offset": 0, 00:13:26.632 "data_size": 65536 00:13:26.632 } 00:13:26.632 ] 00:13:26.632 }' 00:13:26.632 06:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.632 06:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.889 06:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:26.889 06:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.889 06:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.889 06:23:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.889 06:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.889 06:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:26.889 06:23:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:26.889 06:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.889 06:23:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.149 [2024-11-26 06:23:11.034238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:27.149 BaseBdev1 00:13:27.149 06:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.149 06:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:27.149 06:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:27.149 06:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:27.149 06:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:27.149 06:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:27.149 06:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:27.149 06:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:27.149 06:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.149 06:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.149 06:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:13:27.149 06:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:27.149 06:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.149 06:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.149 [ 00:13:27.149 { 00:13:27.149 "name": "BaseBdev1", 00:13:27.149 "aliases": [ 00:13:27.149 "218a1e9f-0e91-4b78-80f9-c8733aa34c58" 00:13:27.149 ], 00:13:27.149 "product_name": "Malloc disk", 00:13:27.149 "block_size": 512, 00:13:27.149 "num_blocks": 65536, 00:13:27.149 "uuid": "218a1e9f-0e91-4b78-80f9-c8733aa34c58", 00:13:27.149 "assigned_rate_limits": { 00:13:27.149 "rw_ios_per_sec": 0, 00:13:27.149 "rw_mbytes_per_sec": 0, 00:13:27.149 "r_mbytes_per_sec": 0, 00:13:27.149 "w_mbytes_per_sec": 0 00:13:27.149 }, 00:13:27.149 "claimed": true, 00:13:27.149 "claim_type": "exclusive_write", 00:13:27.149 "zoned": false, 00:13:27.149 "supported_io_types": { 00:13:27.149 "read": true, 00:13:27.149 "write": true, 00:13:27.149 "unmap": true, 00:13:27.149 "flush": true, 00:13:27.149 "reset": true, 00:13:27.149 "nvme_admin": false, 00:13:27.149 "nvme_io": false, 00:13:27.149 "nvme_io_md": false, 00:13:27.149 "write_zeroes": true, 00:13:27.149 "zcopy": true, 00:13:27.149 "get_zone_info": false, 00:13:27.149 "zone_management": false, 00:13:27.149 "zone_append": false, 00:13:27.149 "compare": false, 00:13:27.149 "compare_and_write": false, 00:13:27.149 "abort": true, 00:13:27.149 "seek_hole": false, 00:13:27.149 "seek_data": false, 00:13:27.149 "copy": true, 00:13:27.149 "nvme_iov_md": false 00:13:27.149 }, 00:13:27.149 "memory_domains": [ 00:13:27.149 { 00:13:27.149 "dma_device_id": "system", 00:13:27.149 "dma_device_type": 1 00:13:27.149 }, 00:13:27.149 { 00:13:27.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:27.149 "dma_device_type": 2 00:13:27.149 } 00:13:27.149 ], 00:13:27.149 "driver_specific": {} 
00:13:27.149 } 00:13:27.149 ] 00:13:27.149 06:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.149 06:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:27.150 06:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:27.150 06:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:27.150 06:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:27.150 06:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:27.150 06:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:27.150 06:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:27.150 06:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.150 06:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.150 06:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:27.150 06:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:27.150 06:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.150 06:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.150 06:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.150 06:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:27.150 06:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.150 06:23:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:27.150 "name": "Existed_Raid", 00:13:27.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.150 "strip_size_kb": 64, 00:13:27.150 "state": "configuring", 00:13:27.150 "raid_level": "raid0", 00:13:27.150 "superblock": false, 00:13:27.150 "num_base_bdevs": 4, 00:13:27.150 "num_base_bdevs_discovered": 3, 00:13:27.150 "num_base_bdevs_operational": 4, 00:13:27.150 "base_bdevs_list": [ 00:13:27.150 { 00:13:27.150 "name": "BaseBdev1", 00:13:27.150 "uuid": "218a1e9f-0e91-4b78-80f9-c8733aa34c58", 00:13:27.150 "is_configured": true, 00:13:27.150 "data_offset": 0, 00:13:27.150 "data_size": 65536 00:13:27.150 }, 00:13:27.150 { 00:13:27.150 "name": null, 00:13:27.150 "uuid": "86060dbb-b981-46e9-bae3-9ec6c41cc228", 00:13:27.150 "is_configured": false, 00:13:27.150 "data_offset": 0, 00:13:27.150 "data_size": 65536 00:13:27.150 }, 00:13:27.150 { 00:13:27.150 "name": "BaseBdev3", 00:13:27.150 "uuid": "7d5ec3e5-3782-43a4-9061-3eb501f3ce3a", 00:13:27.150 "is_configured": true, 00:13:27.150 "data_offset": 0, 00:13:27.150 "data_size": 65536 00:13:27.150 }, 00:13:27.150 { 00:13:27.150 "name": "BaseBdev4", 00:13:27.150 "uuid": "d350bd2c-dc48-479c-a0eb-82fa188d66f2", 00:13:27.150 "is_configured": true, 00:13:27.150 "data_offset": 0, 00:13:27.150 "data_size": 65536 00:13:27.150 } 00:13:27.150 ] 00:13:27.150 }' 00:13:27.150 06:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:27.150 06:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.409 06:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.409 06:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.409 06:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.409 06:23:11 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:27.409 06:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.668 06:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:27.668 06:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:27.668 06:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.668 06:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.668 [2024-11-26 06:23:11.565541] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:27.668 06:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.668 06:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:27.668 06:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:27.668 06:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:27.668 06:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:27.668 06:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:27.668 06:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:27.668 06:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.668 06:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.668 06:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:27.668 06:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local 
tmp 00:13:27.668 06:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.668 06:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:27.668 06:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.668 06:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.668 06:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.668 06:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:27.668 "name": "Existed_Raid", 00:13:27.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.668 "strip_size_kb": 64, 00:13:27.668 "state": "configuring", 00:13:27.668 "raid_level": "raid0", 00:13:27.668 "superblock": false, 00:13:27.668 "num_base_bdevs": 4, 00:13:27.668 "num_base_bdevs_discovered": 2, 00:13:27.668 "num_base_bdevs_operational": 4, 00:13:27.668 "base_bdevs_list": [ 00:13:27.668 { 00:13:27.668 "name": "BaseBdev1", 00:13:27.668 "uuid": "218a1e9f-0e91-4b78-80f9-c8733aa34c58", 00:13:27.668 "is_configured": true, 00:13:27.668 "data_offset": 0, 00:13:27.668 "data_size": 65536 00:13:27.668 }, 00:13:27.668 { 00:13:27.668 "name": null, 00:13:27.668 "uuid": "86060dbb-b981-46e9-bae3-9ec6c41cc228", 00:13:27.668 "is_configured": false, 00:13:27.668 "data_offset": 0, 00:13:27.668 "data_size": 65536 00:13:27.668 }, 00:13:27.668 { 00:13:27.668 "name": null, 00:13:27.668 "uuid": "7d5ec3e5-3782-43a4-9061-3eb501f3ce3a", 00:13:27.668 "is_configured": false, 00:13:27.668 "data_offset": 0, 00:13:27.668 "data_size": 65536 00:13:27.668 }, 00:13:27.668 { 00:13:27.668 "name": "BaseBdev4", 00:13:27.668 "uuid": "d350bd2c-dc48-479c-a0eb-82fa188d66f2", 00:13:27.668 "is_configured": true, 00:13:27.668 "data_offset": 0, 00:13:27.668 "data_size": 65536 00:13:27.668 } 00:13:27.668 ] 00:13:27.668 }' 
00:13:27.668 06:23:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:27.668 06:23:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.926 06:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:27.926 06:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.926 06:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.926 06:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.926 06:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.185 06:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:28.185 06:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:28.185 06:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.185 06:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.185 [2024-11-26 06:23:12.088655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:28.185 06:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.185 06:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:28.185 06:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:28.185 06:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:28.185 06:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:28.185 06:23:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:28.185 06:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:28.185 06:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:28.185 06:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:28.185 06:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:28.185 06:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:28.185 06:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.185 06:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.185 06:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.185 06:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:28.185 06:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.185 06:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.185 "name": "Existed_Raid", 00:13:28.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.185 "strip_size_kb": 64, 00:13:28.185 "state": "configuring", 00:13:28.185 "raid_level": "raid0", 00:13:28.185 "superblock": false, 00:13:28.185 "num_base_bdevs": 4, 00:13:28.185 "num_base_bdevs_discovered": 3, 00:13:28.185 "num_base_bdevs_operational": 4, 00:13:28.185 "base_bdevs_list": [ 00:13:28.185 { 00:13:28.185 "name": "BaseBdev1", 00:13:28.185 "uuid": "218a1e9f-0e91-4b78-80f9-c8733aa34c58", 00:13:28.185 "is_configured": true, 00:13:28.185 "data_offset": 0, 00:13:28.185 "data_size": 65536 00:13:28.185 }, 00:13:28.185 { 00:13:28.185 "name": null, 00:13:28.185 
"uuid": "86060dbb-b981-46e9-bae3-9ec6c41cc228", 00:13:28.185 "is_configured": false, 00:13:28.185 "data_offset": 0, 00:13:28.185 "data_size": 65536 00:13:28.185 }, 00:13:28.185 { 00:13:28.185 "name": "BaseBdev3", 00:13:28.185 "uuid": "7d5ec3e5-3782-43a4-9061-3eb501f3ce3a", 00:13:28.185 "is_configured": true, 00:13:28.185 "data_offset": 0, 00:13:28.185 "data_size": 65536 00:13:28.185 }, 00:13:28.185 { 00:13:28.185 "name": "BaseBdev4", 00:13:28.185 "uuid": "d350bd2c-dc48-479c-a0eb-82fa188d66f2", 00:13:28.185 "is_configured": true, 00:13:28.185 "data_offset": 0, 00:13:28.185 "data_size": 65536 00:13:28.185 } 00:13:28.185 ] 00:13:28.185 }' 00:13:28.185 06:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.185 06:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.445 06:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:28.445 06:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.445 06:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.445 06:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.445 06:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.703 06:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:28.703 06:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:28.703 06:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.703 06:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.703 [2024-11-26 06:23:12.580033] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:28.703 06:23:12 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.703 06:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:28.703 06:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:28.703 06:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:28.703 06:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:28.703 06:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:28.703 06:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:28.703 06:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:28.703 06:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:28.703 06:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:28.703 06:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:28.703 06:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.703 06:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.703 06:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:28.703 06:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.703 06:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.703 06:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.703 "name": "Existed_Raid", 00:13:28.703 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:28.703 "strip_size_kb": 64, 00:13:28.703 "state": "configuring", 00:13:28.703 "raid_level": "raid0", 00:13:28.703 "superblock": false, 00:13:28.703 "num_base_bdevs": 4, 00:13:28.703 "num_base_bdevs_discovered": 2, 00:13:28.703 "num_base_bdevs_operational": 4, 00:13:28.703 "base_bdevs_list": [ 00:13:28.703 { 00:13:28.703 "name": null, 00:13:28.703 "uuid": "218a1e9f-0e91-4b78-80f9-c8733aa34c58", 00:13:28.703 "is_configured": false, 00:13:28.703 "data_offset": 0, 00:13:28.703 "data_size": 65536 00:13:28.703 }, 00:13:28.703 { 00:13:28.703 "name": null, 00:13:28.703 "uuid": "86060dbb-b981-46e9-bae3-9ec6c41cc228", 00:13:28.703 "is_configured": false, 00:13:28.703 "data_offset": 0, 00:13:28.703 "data_size": 65536 00:13:28.703 }, 00:13:28.703 { 00:13:28.703 "name": "BaseBdev3", 00:13:28.703 "uuid": "7d5ec3e5-3782-43a4-9061-3eb501f3ce3a", 00:13:28.703 "is_configured": true, 00:13:28.703 "data_offset": 0, 00:13:28.703 "data_size": 65536 00:13:28.703 }, 00:13:28.704 { 00:13:28.704 "name": "BaseBdev4", 00:13:28.704 "uuid": "d350bd2c-dc48-479c-a0eb-82fa188d66f2", 00:13:28.704 "is_configured": true, 00:13:28.704 "data_offset": 0, 00:13:28.704 "data_size": 65536 00:13:28.704 } 00:13:28.704 ] 00:13:28.704 }' 00:13:28.704 06:23:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.704 06:23:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.270 06:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:29.270 06:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.270 06:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.270 06:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.270 06:23:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.270 06:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:29.270 06:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:29.270 06:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.270 06:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.270 [2024-11-26 06:23:13.237788] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:29.270 06:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.271 06:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:29.271 06:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:29.271 06:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:29.271 06:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:29.271 06:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:29.271 06:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:29.271 06:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.271 06:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.271 06:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.271 06:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.271 06:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:29.271 06:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.271 06:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:29.271 06:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.271 06:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.271 06:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.271 "name": "Existed_Raid", 00:13:29.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.271 "strip_size_kb": 64, 00:13:29.271 "state": "configuring", 00:13:29.271 "raid_level": "raid0", 00:13:29.271 "superblock": false, 00:13:29.271 "num_base_bdevs": 4, 00:13:29.271 "num_base_bdevs_discovered": 3, 00:13:29.271 "num_base_bdevs_operational": 4, 00:13:29.271 "base_bdevs_list": [ 00:13:29.271 { 00:13:29.271 "name": null, 00:13:29.271 "uuid": "218a1e9f-0e91-4b78-80f9-c8733aa34c58", 00:13:29.271 "is_configured": false, 00:13:29.271 "data_offset": 0, 00:13:29.271 "data_size": 65536 00:13:29.271 }, 00:13:29.271 { 00:13:29.271 "name": "BaseBdev2", 00:13:29.271 "uuid": "86060dbb-b981-46e9-bae3-9ec6c41cc228", 00:13:29.271 "is_configured": true, 00:13:29.271 "data_offset": 0, 00:13:29.271 "data_size": 65536 00:13:29.271 }, 00:13:29.271 { 00:13:29.271 "name": "BaseBdev3", 00:13:29.271 "uuid": "7d5ec3e5-3782-43a4-9061-3eb501f3ce3a", 00:13:29.271 "is_configured": true, 00:13:29.271 "data_offset": 0, 00:13:29.271 "data_size": 65536 00:13:29.271 }, 00:13:29.271 { 00:13:29.271 "name": "BaseBdev4", 00:13:29.271 "uuid": "d350bd2c-dc48-479c-a0eb-82fa188d66f2", 00:13:29.271 "is_configured": true, 00:13:29.271 "data_offset": 0, 00:13:29.271 "data_size": 65536 00:13:29.271 } 00:13:29.271 ] 00:13:29.271 }' 00:13:29.271 06:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:13:29.271 06:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.840 06:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.840 06:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:29.840 06:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.840 06:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.840 06:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.840 06:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:29.840 06:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.840 06:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:29.840 06:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.840 06:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.840 06:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.840 06:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 218a1e9f-0e91-4b78-80f9-c8733aa34c58 00:13:29.840 06:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.840 06:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.840 [2024-11-26 06:23:13.855739] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:29.840 [2024-11-26 06:23:13.855831] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:13:29.840 [2024-11-26 06:23:13.855841] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:13:29.840 [2024-11-26 06:23:13.856222] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:29.840 [2024-11-26 06:23:13.856426] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:29.840 [2024-11-26 06:23:13.856452] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:29.840 [2024-11-26 06:23:13.856760] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:29.840 NewBaseBdev 00:13:29.840 06:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.840 06:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:29.840 06:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:13:29.840 06:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:29.840 06:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:29.840 06:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:29.840 06:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:29.840 06:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:29.840 06:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.840 06:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.840 06:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.840 06:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # 
rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:29.840 06:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.840 06:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.840 [ 00:13:29.840 { 00:13:29.840 "name": "NewBaseBdev", 00:13:29.840 "aliases": [ 00:13:29.840 "218a1e9f-0e91-4b78-80f9-c8733aa34c58" 00:13:29.840 ], 00:13:29.840 "product_name": "Malloc disk", 00:13:29.840 "block_size": 512, 00:13:29.840 "num_blocks": 65536, 00:13:29.840 "uuid": "218a1e9f-0e91-4b78-80f9-c8733aa34c58", 00:13:29.840 "assigned_rate_limits": { 00:13:29.840 "rw_ios_per_sec": 0, 00:13:29.840 "rw_mbytes_per_sec": 0, 00:13:29.840 "r_mbytes_per_sec": 0, 00:13:29.840 "w_mbytes_per_sec": 0 00:13:29.840 }, 00:13:29.840 "claimed": true, 00:13:29.840 "claim_type": "exclusive_write", 00:13:29.840 "zoned": false, 00:13:29.840 "supported_io_types": { 00:13:29.840 "read": true, 00:13:29.840 "write": true, 00:13:29.840 "unmap": true, 00:13:29.840 "flush": true, 00:13:29.840 "reset": true, 00:13:29.840 "nvme_admin": false, 00:13:29.840 "nvme_io": false, 00:13:29.840 "nvme_io_md": false, 00:13:29.840 "write_zeroes": true, 00:13:29.840 "zcopy": true, 00:13:29.840 "get_zone_info": false, 00:13:29.840 "zone_management": false, 00:13:29.840 "zone_append": false, 00:13:29.840 "compare": false, 00:13:29.840 "compare_and_write": false, 00:13:29.840 "abort": true, 00:13:29.840 "seek_hole": false, 00:13:29.840 "seek_data": false, 00:13:29.840 "copy": true, 00:13:29.840 "nvme_iov_md": false 00:13:29.840 }, 00:13:29.840 "memory_domains": [ 00:13:29.840 { 00:13:29.840 "dma_device_id": "system", 00:13:29.840 "dma_device_type": 1 00:13:29.840 }, 00:13:29.840 { 00:13:29.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:29.840 "dma_device_type": 2 00:13:29.840 } 00:13:29.840 ], 00:13:29.840 "driver_specific": {} 00:13:29.841 } 00:13:29.841 ] 00:13:29.841 06:23:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.841 06:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:29.841 06:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:13:29.841 06:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:29.841 06:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:29.841 06:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:29.841 06:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:29.841 06:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:29.841 06:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.841 06:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.841 06:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.841 06:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.841 06:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.841 06:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:29.841 06:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.841 06:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.841 06:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.841 06:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.841 "name": 
"Existed_Raid", 00:13:29.841 "uuid": "33f7b413-c3b4-49fd-aee8-e5daa2865847", 00:13:29.841 "strip_size_kb": 64, 00:13:29.841 "state": "online", 00:13:29.841 "raid_level": "raid0", 00:13:29.841 "superblock": false, 00:13:29.841 "num_base_bdevs": 4, 00:13:29.841 "num_base_bdevs_discovered": 4, 00:13:29.841 "num_base_bdevs_operational": 4, 00:13:29.841 "base_bdevs_list": [ 00:13:29.841 { 00:13:29.841 "name": "NewBaseBdev", 00:13:29.841 "uuid": "218a1e9f-0e91-4b78-80f9-c8733aa34c58", 00:13:29.841 "is_configured": true, 00:13:29.841 "data_offset": 0, 00:13:29.841 "data_size": 65536 00:13:29.841 }, 00:13:29.841 { 00:13:29.841 "name": "BaseBdev2", 00:13:29.841 "uuid": "86060dbb-b981-46e9-bae3-9ec6c41cc228", 00:13:29.841 "is_configured": true, 00:13:29.841 "data_offset": 0, 00:13:29.841 "data_size": 65536 00:13:29.841 }, 00:13:29.841 { 00:13:29.841 "name": "BaseBdev3", 00:13:29.841 "uuid": "7d5ec3e5-3782-43a4-9061-3eb501f3ce3a", 00:13:29.841 "is_configured": true, 00:13:29.841 "data_offset": 0, 00:13:29.841 "data_size": 65536 00:13:29.841 }, 00:13:29.841 { 00:13:29.841 "name": "BaseBdev4", 00:13:29.841 "uuid": "d350bd2c-dc48-479c-a0eb-82fa188d66f2", 00:13:29.841 "is_configured": true, 00:13:29.841 "data_offset": 0, 00:13:29.841 "data_size": 65536 00:13:29.841 } 00:13:29.841 ] 00:13:29.841 }' 00:13:29.841 06:23:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:29.841 06:23:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.410 06:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:30.410 06:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:30.410 06:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:30.410 06:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:30.410 06:23:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:30.410 06:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:30.410 06:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:30.410 06:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:30.410 06:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.411 06:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.411 [2024-11-26 06:23:14.339578] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:30.411 06:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.411 06:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:30.411 "name": "Existed_Raid", 00:13:30.411 "aliases": [ 00:13:30.411 "33f7b413-c3b4-49fd-aee8-e5daa2865847" 00:13:30.411 ], 00:13:30.411 "product_name": "Raid Volume", 00:13:30.411 "block_size": 512, 00:13:30.411 "num_blocks": 262144, 00:13:30.411 "uuid": "33f7b413-c3b4-49fd-aee8-e5daa2865847", 00:13:30.411 "assigned_rate_limits": { 00:13:30.411 "rw_ios_per_sec": 0, 00:13:30.411 "rw_mbytes_per_sec": 0, 00:13:30.411 "r_mbytes_per_sec": 0, 00:13:30.411 "w_mbytes_per_sec": 0 00:13:30.411 }, 00:13:30.411 "claimed": false, 00:13:30.411 "zoned": false, 00:13:30.411 "supported_io_types": { 00:13:30.411 "read": true, 00:13:30.411 "write": true, 00:13:30.411 "unmap": true, 00:13:30.411 "flush": true, 00:13:30.411 "reset": true, 00:13:30.411 "nvme_admin": false, 00:13:30.411 "nvme_io": false, 00:13:30.411 "nvme_io_md": false, 00:13:30.411 "write_zeroes": true, 00:13:30.411 "zcopy": false, 00:13:30.411 "get_zone_info": false, 00:13:30.411 "zone_management": false, 00:13:30.411 "zone_append": false, 00:13:30.411 "compare": 
false, 00:13:30.411 "compare_and_write": false, 00:13:30.411 "abort": false, 00:13:30.411 "seek_hole": false, 00:13:30.411 "seek_data": false, 00:13:30.411 "copy": false, 00:13:30.411 "nvme_iov_md": false 00:13:30.411 }, 00:13:30.411 "memory_domains": [ 00:13:30.411 { 00:13:30.411 "dma_device_id": "system", 00:13:30.411 "dma_device_type": 1 00:13:30.411 }, 00:13:30.411 { 00:13:30.411 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.411 "dma_device_type": 2 00:13:30.411 }, 00:13:30.411 { 00:13:30.411 "dma_device_id": "system", 00:13:30.411 "dma_device_type": 1 00:13:30.411 }, 00:13:30.411 { 00:13:30.411 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.411 "dma_device_type": 2 00:13:30.411 }, 00:13:30.411 { 00:13:30.411 "dma_device_id": "system", 00:13:30.411 "dma_device_type": 1 00:13:30.411 }, 00:13:30.411 { 00:13:30.411 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.411 "dma_device_type": 2 00:13:30.411 }, 00:13:30.411 { 00:13:30.411 "dma_device_id": "system", 00:13:30.411 "dma_device_type": 1 00:13:30.411 }, 00:13:30.411 { 00:13:30.411 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.411 "dma_device_type": 2 00:13:30.411 } 00:13:30.411 ], 00:13:30.411 "driver_specific": { 00:13:30.411 "raid": { 00:13:30.411 "uuid": "33f7b413-c3b4-49fd-aee8-e5daa2865847", 00:13:30.411 "strip_size_kb": 64, 00:13:30.411 "state": "online", 00:13:30.411 "raid_level": "raid0", 00:13:30.411 "superblock": false, 00:13:30.411 "num_base_bdevs": 4, 00:13:30.411 "num_base_bdevs_discovered": 4, 00:13:30.411 "num_base_bdevs_operational": 4, 00:13:30.411 "base_bdevs_list": [ 00:13:30.411 { 00:13:30.411 "name": "NewBaseBdev", 00:13:30.411 "uuid": "218a1e9f-0e91-4b78-80f9-c8733aa34c58", 00:13:30.411 "is_configured": true, 00:13:30.411 "data_offset": 0, 00:13:30.411 "data_size": 65536 00:13:30.411 }, 00:13:30.411 { 00:13:30.411 "name": "BaseBdev2", 00:13:30.411 "uuid": "86060dbb-b981-46e9-bae3-9ec6c41cc228", 00:13:30.411 "is_configured": true, 00:13:30.411 "data_offset": 0, 00:13:30.411 
"data_size": 65536 00:13:30.411 }, 00:13:30.411 { 00:13:30.411 "name": "BaseBdev3", 00:13:30.411 "uuid": "7d5ec3e5-3782-43a4-9061-3eb501f3ce3a", 00:13:30.411 "is_configured": true, 00:13:30.411 "data_offset": 0, 00:13:30.411 "data_size": 65536 00:13:30.411 }, 00:13:30.411 { 00:13:30.411 "name": "BaseBdev4", 00:13:30.411 "uuid": "d350bd2c-dc48-479c-a0eb-82fa188d66f2", 00:13:30.411 "is_configured": true, 00:13:30.411 "data_offset": 0, 00:13:30.411 "data_size": 65536 00:13:30.411 } 00:13:30.411 ] 00:13:30.411 } 00:13:30.411 } 00:13:30.411 }' 00:13:30.411 06:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:30.411 06:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:30.411 BaseBdev2 00:13:30.411 BaseBdev3 00:13:30.411 BaseBdev4' 00:13:30.411 06:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:30.411 06:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:30.411 06:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:30.411 06:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:30.411 06:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.411 06:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.411 06:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:30.411 06:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.411 06:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 
' 00:13:30.411 06:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:30.411 06:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:30.411 06:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:30.411 06:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:30.411 06:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.411 06:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.670 06:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.670 06:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:30.670 06:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:30.670 06:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:30.671 06:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:30.671 06:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:30.671 06:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.671 06:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.671 06:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.671 06:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:30.671 06:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 
00:13:30.671 06:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:30.671 06:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:30.671 06:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:30.671 06:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.671 06:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.671 06:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.671 06:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:30.671 06:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:30.671 06:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:30.671 06:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.671 06:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.671 [2024-11-26 06:23:14.658584] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:30.671 [2024-11-26 06:23:14.658627] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:30.671 [2024-11-26 06:23:14.658724] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:30.671 [2024-11-26 06:23:14.658804] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:30.671 [2024-11-26 06:23:14.658825] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:13:30.671 06:23:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.671 06:23:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69835 00:13:30.671 06:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 69835 ']' 00:13:30.671 06:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 69835 00:13:30.671 06:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:13:30.671 06:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:30.671 06:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69835 00:13:30.671 killing process with pid 69835 00:13:30.671 06:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:30.671 06:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:30.671 06:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69835' 00:13:30.671 06:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 69835 00:13:30.671 [2024-11-26 06:23:14.699903] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:30.671 06:23:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 69835 00:13:31.238 [2024-11-26 06:23:15.181568] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:32.616 06:23:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:32.616 00:13:32.616 real 0m12.448s 00:13:32.616 user 0m19.534s 00:13:32.616 sys 0m2.175s 00:13:32.616 06:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:32.616 06:23:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:13:32.616 ************************************ 00:13:32.616 END TEST raid_state_function_test 00:13:32.616 ************************************ 00:13:32.616 06:23:16 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:13:32.616 06:23:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:32.616 06:23:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:32.616 06:23:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:32.616 ************************************ 00:13:32.616 START TEST raid_state_function_test_sb 00:13:32.616 ************************************ 00:13:32.616 06:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:13:32.616 06:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:13:32.616 06:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:13:32.616 06:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:32.616 06:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:32.616 06:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:32.616 06:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:32.616 06:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:32.616 06:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:32.616 06:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:32.616 06:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:32.616 06:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
(( i++ )) 00:13:32.616 06:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:32.616 06:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:32.616 06:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:32.616 06:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:32.616 06:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:13:32.616 06:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:32.616 06:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:32.616 06:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:32.616 06:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:32.616 06:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:32.616 06:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:32.617 06:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:32.617 06:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:32.617 06:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:13:32.617 06:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:32.617 06:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:32.617 06:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:32.617 06:23:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:32.617 06:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70521 00:13:32.617 06:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:32.617 Process raid pid: 70521 00:13:32.617 06:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70521' 00:13:32.617 06:23:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70521 00:13:32.617 06:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 70521 ']' 00:13:32.617 06:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:32.617 06:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:32.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:32.617 06:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:32.617 06:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:32.617 06:23:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.617 [2024-11-26 06:23:16.736233] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:13:32.617 [2024-11-26 06:23:16.736432] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:32.876 [2024-11-26 06:23:16.937199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:33.134 [2024-11-26 06:23:17.089077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.393 [2024-11-26 06:23:17.361008] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:33.393 [2024-11-26 06:23:17.361081] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:33.652 06:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:33.652 06:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:33.652 06:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:33.652 06:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.652 06:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.652 [2024-11-26 06:23:17.649035] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:33.652 [2024-11-26 06:23:17.649123] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:33.652 [2024-11-26 06:23:17.649142] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:33.652 [2024-11-26 06:23:17.649159] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:33.652 [2024-11-26 06:23:17.649171] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:13:33.652 [2024-11-26 06:23:17.649188] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:33.652 [2024-11-26 06:23:17.649203] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:33.652 [2024-11-26 06:23:17.649221] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:33.652 06:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.652 06:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:33.652 06:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:33.652 06:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:33.652 06:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:33.652 06:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:33.652 06:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:33.653 06:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:33.653 06:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:33.653 06:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:33.653 06:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:33.653 06:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:33.653 06:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.653 06:23:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.653 06:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.653 06:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.653 06:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:33.653 "name": "Existed_Raid", 00:13:33.653 "uuid": "9c965521-1b5b-4714-9c26-84fafcb68394", 00:13:33.653 "strip_size_kb": 64, 00:13:33.653 "state": "configuring", 00:13:33.653 "raid_level": "raid0", 00:13:33.653 "superblock": true, 00:13:33.653 "num_base_bdevs": 4, 00:13:33.653 "num_base_bdevs_discovered": 0, 00:13:33.653 "num_base_bdevs_operational": 4, 00:13:33.653 "base_bdevs_list": [ 00:13:33.653 { 00:13:33.653 "name": "BaseBdev1", 00:13:33.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.653 "is_configured": false, 00:13:33.653 "data_offset": 0, 00:13:33.653 "data_size": 0 00:13:33.653 }, 00:13:33.653 { 00:13:33.653 "name": "BaseBdev2", 00:13:33.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.653 "is_configured": false, 00:13:33.653 "data_offset": 0, 00:13:33.653 "data_size": 0 00:13:33.653 }, 00:13:33.653 { 00:13:33.653 "name": "BaseBdev3", 00:13:33.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.653 "is_configured": false, 00:13:33.653 "data_offset": 0, 00:13:33.653 "data_size": 0 00:13:33.653 }, 00:13:33.653 { 00:13:33.653 "name": "BaseBdev4", 00:13:33.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.653 "is_configured": false, 00:13:33.653 "data_offset": 0, 00:13:33.653 "data_size": 0 00:13:33.653 } 00:13:33.653 ] 00:13:33.653 }' 00:13:33.653 06:23:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:33.653 06:23:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.222 06:23:18 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:34.222 06:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.222 06:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.222 [2024-11-26 06:23:18.128241] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:34.222 [2024-11-26 06:23:18.128324] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:34.222 06:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.222 06:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:34.222 06:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.223 06:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.223 [2024-11-26 06:23:18.140205] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:34.223 [2024-11-26 06:23:18.140271] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:34.223 [2024-11-26 06:23:18.140288] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:34.223 [2024-11-26 06:23:18.140306] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:34.223 [2024-11-26 06:23:18.140316] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:34.223 [2024-11-26 06:23:18.140330] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:34.223 [2024-11-26 06:23:18.140341] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:13:34.223 [2024-11-26 06:23:18.140355] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:34.223 06:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.223 06:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:34.223 06:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.223 06:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.223 [2024-11-26 06:23:18.202175] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:34.223 BaseBdev1 00:13:34.223 06:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.223 06:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:34.223 06:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:34.223 06:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:34.223 06:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:34.223 06:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:34.223 06:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:34.223 06:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:34.223 06:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.223 06:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.223 06:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:13:34.223 06:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:34.223 06:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.223 06:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.223 [ 00:13:34.223 { 00:13:34.223 "name": "BaseBdev1", 00:13:34.223 "aliases": [ 00:13:34.223 "415f803c-796f-4efb-a8f8-31ddc947c7dd" 00:13:34.223 ], 00:13:34.223 "product_name": "Malloc disk", 00:13:34.223 "block_size": 512, 00:13:34.223 "num_blocks": 65536, 00:13:34.223 "uuid": "415f803c-796f-4efb-a8f8-31ddc947c7dd", 00:13:34.223 "assigned_rate_limits": { 00:13:34.223 "rw_ios_per_sec": 0, 00:13:34.223 "rw_mbytes_per_sec": 0, 00:13:34.223 "r_mbytes_per_sec": 0, 00:13:34.223 "w_mbytes_per_sec": 0 00:13:34.223 }, 00:13:34.223 "claimed": true, 00:13:34.223 "claim_type": "exclusive_write", 00:13:34.223 "zoned": false, 00:13:34.223 "supported_io_types": { 00:13:34.223 "read": true, 00:13:34.223 "write": true, 00:13:34.223 "unmap": true, 00:13:34.223 "flush": true, 00:13:34.223 "reset": true, 00:13:34.223 "nvme_admin": false, 00:13:34.223 "nvme_io": false, 00:13:34.223 "nvme_io_md": false, 00:13:34.223 "write_zeroes": true, 00:13:34.223 "zcopy": true, 00:13:34.223 "get_zone_info": false, 00:13:34.223 "zone_management": false, 00:13:34.223 "zone_append": false, 00:13:34.223 "compare": false, 00:13:34.223 "compare_and_write": false, 00:13:34.223 "abort": true, 00:13:34.223 "seek_hole": false, 00:13:34.223 "seek_data": false, 00:13:34.223 "copy": true, 00:13:34.223 "nvme_iov_md": false 00:13:34.223 }, 00:13:34.223 "memory_domains": [ 00:13:34.223 { 00:13:34.223 "dma_device_id": "system", 00:13:34.223 "dma_device_type": 1 00:13:34.223 }, 00:13:34.223 { 00:13:34.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:34.223 "dma_device_type": 2 00:13:34.223 } 00:13:34.223 ], 00:13:34.223 "driver_specific": {} 
00:13:34.223 } 00:13:34.223 ] 00:13:34.223 06:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.223 06:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:34.223 06:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:34.223 06:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:34.223 06:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:34.223 06:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:34.223 06:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:34.223 06:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:34.223 06:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.223 06:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.223 06:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.223 06:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.223 06:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:34.223 06:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.223 06:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.223 06:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.223 06:23:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.223 06:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:34.223 "name": "Existed_Raid", 00:13:34.223 "uuid": "c47258ac-d5d9-4e94-ac36-5789ebf7076d", 00:13:34.223 "strip_size_kb": 64, 00:13:34.223 "state": "configuring", 00:13:34.223 "raid_level": "raid0", 00:13:34.223 "superblock": true, 00:13:34.223 "num_base_bdevs": 4, 00:13:34.223 "num_base_bdevs_discovered": 1, 00:13:34.223 "num_base_bdevs_operational": 4, 00:13:34.223 "base_bdevs_list": [ 00:13:34.223 { 00:13:34.223 "name": "BaseBdev1", 00:13:34.223 "uuid": "415f803c-796f-4efb-a8f8-31ddc947c7dd", 00:13:34.223 "is_configured": true, 00:13:34.223 "data_offset": 2048, 00:13:34.223 "data_size": 63488 00:13:34.223 }, 00:13:34.223 { 00:13:34.223 "name": "BaseBdev2", 00:13:34.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.223 "is_configured": false, 00:13:34.223 "data_offset": 0, 00:13:34.223 "data_size": 0 00:13:34.223 }, 00:13:34.223 { 00:13:34.223 "name": "BaseBdev3", 00:13:34.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.223 "is_configured": false, 00:13:34.223 "data_offset": 0, 00:13:34.223 "data_size": 0 00:13:34.223 }, 00:13:34.223 { 00:13:34.223 "name": "BaseBdev4", 00:13:34.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.224 "is_configured": false, 00:13:34.224 "data_offset": 0, 00:13:34.224 "data_size": 0 00:13:34.224 } 00:13:34.224 ] 00:13:34.224 }' 00:13:34.224 06:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.224 06:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.790 06:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:34.790 06:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.790 06:23:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:34.790 [2024-11-26 06:23:18.705508] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:34.790 [2024-11-26 06:23:18.705591] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:34.790 06:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.790 06:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:34.790 06:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.790 06:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.791 [2024-11-26 06:23:18.717638] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:34.791 [2024-11-26 06:23:18.719960] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:34.791 [2024-11-26 06:23:18.720016] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:34.791 [2024-11-26 06:23:18.720029] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:34.791 [2024-11-26 06:23:18.720043] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:34.791 [2024-11-26 06:23:18.720066] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:34.791 [2024-11-26 06:23:18.720078] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:34.791 06:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.791 06:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:34.791 06:23:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:34.791 06:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:34.791 06:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:34.791 06:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:34.791 06:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:34.791 06:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:34.791 06:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:34.791 06:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.791 06:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.791 06:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.791 06:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.791 06:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:34.791 06:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.791 06:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.791 06:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.791 06:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.791 06:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:34.791 "name": 
"Existed_Raid", 00:13:34.791 "uuid": "46ab9dd2-5183-40de-9814-8f2663f3eeb3", 00:13:34.791 "strip_size_kb": 64, 00:13:34.791 "state": "configuring", 00:13:34.791 "raid_level": "raid0", 00:13:34.791 "superblock": true, 00:13:34.791 "num_base_bdevs": 4, 00:13:34.791 "num_base_bdevs_discovered": 1, 00:13:34.791 "num_base_bdevs_operational": 4, 00:13:34.791 "base_bdevs_list": [ 00:13:34.791 { 00:13:34.791 "name": "BaseBdev1", 00:13:34.791 "uuid": "415f803c-796f-4efb-a8f8-31ddc947c7dd", 00:13:34.791 "is_configured": true, 00:13:34.791 "data_offset": 2048, 00:13:34.791 "data_size": 63488 00:13:34.791 }, 00:13:34.791 { 00:13:34.791 "name": "BaseBdev2", 00:13:34.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.791 "is_configured": false, 00:13:34.791 "data_offset": 0, 00:13:34.791 "data_size": 0 00:13:34.791 }, 00:13:34.791 { 00:13:34.791 "name": "BaseBdev3", 00:13:34.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.791 "is_configured": false, 00:13:34.791 "data_offset": 0, 00:13:34.791 "data_size": 0 00:13:34.791 }, 00:13:34.791 { 00:13:34.791 "name": "BaseBdev4", 00:13:34.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.791 "is_configured": false, 00:13:34.791 "data_offset": 0, 00:13:34.791 "data_size": 0 00:13:34.791 } 00:13:34.791 ] 00:13:34.791 }' 00:13:34.791 06:23:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.791 06:23:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.051 06:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:35.051 06:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.051 06:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.051 [2024-11-26 06:23:19.161031] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:13:35.051 BaseBdev2 00:13:35.051 06:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.051 06:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:35.051 06:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:35.051 06:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:35.051 06:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:35.051 06:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:35.051 06:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:35.051 06:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:35.051 06:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.051 06:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.051 06:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.051 06:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:35.051 06:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.051 06:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.309 [ 00:13:35.309 { 00:13:35.309 "name": "BaseBdev2", 00:13:35.309 "aliases": [ 00:13:35.309 "db4d9707-f0cb-4480-b35a-f069aaf86231" 00:13:35.309 ], 00:13:35.309 "product_name": "Malloc disk", 00:13:35.309 "block_size": 512, 00:13:35.309 "num_blocks": 65536, 00:13:35.309 "uuid": "db4d9707-f0cb-4480-b35a-f069aaf86231", 00:13:35.309 
"assigned_rate_limits": { 00:13:35.309 "rw_ios_per_sec": 0, 00:13:35.309 "rw_mbytes_per_sec": 0, 00:13:35.309 "r_mbytes_per_sec": 0, 00:13:35.309 "w_mbytes_per_sec": 0 00:13:35.309 }, 00:13:35.309 "claimed": true, 00:13:35.309 "claim_type": "exclusive_write", 00:13:35.309 "zoned": false, 00:13:35.309 "supported_io_types": { 00:13:35.309 "read": true, 00:13:35.309 "write": true, 00:13:35.309 "unmap": true, 00:13:35.309 "flush": true, 00:13:35.309 "reset": true, 00:13:35.309 "nvme_admin": false, 00:13:35.309 "nvme_io": false, 00:13:35.309 "nvme_io_md": false, 00:13:35.309 "write_zeroes": true, 00:13:35.309 "zcopy": true, 00:13:35.309 "get_zone_info": false, 00:13:35.309 "zone_management": false, 00:13:35.309 "zone_append": false, 00:13:35.310 "compare": false, 00:13:35.310 "compare_and_write": false, 00:13:35.310 "abort": true, 00:13:35.310 "seek_hole": false, 00:13:35.310 "seek_data": false, 00:13:35.310 "copy": true, 00:13:35.310 "nvme_iov_md": false 00:13:35.310 }, 00:13:35.310 "memory_domains": [ 00:13:35.310 { 00:13:35.310 "dma_device_id": "system", 00:13:35.310 "dma_device_type": 1 00:13:35.310 }, 00:13:35.310 { 00:13:35.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:35.310 "dma_device_type": 2 00:13:35.310 } 00:13:35.310 ], 00:13:35.310 "driver_specific": {} 00:13:35.310 } 00:13:35.310 ] 00:13:35.310 06:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.310 06:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:35.310 06:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:35.310 06:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:35.310 06:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:35.310 06:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:13:35.310 06:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:35.310 06:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:35.310 06:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:35.310 06:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:35.310 06:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.310 06:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.310 06:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.310 06:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.310 06:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.310 06:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.310 06:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:35.310 06:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.310 06:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.310 06:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.310 "name": "Existed_Raid", 00:13:35.310 "uuid": "46ab9dd2-5183-40de-9814-8f2663f3eeb3", 00:13:35.310 "strip_size_kb": 64, 00:13:35.310 "state": "configuring", 00:13:35.310 "raid_level": "raid0", 00:13:35.310 "superblock": true, 00:13:35.310 "num_base_bdevs": 4, 00:13:35.310 "num_base_bdevs_discovered": 2, 00:13:35.310 "num_base_bdevs_operational": 4, 
00:13:35.310 "base_bdevs_list": [ 00:13:35.310 { 00:13:35.310 "name": "BaseBdev1", 00:13:35.310 "uuid": "415f803c-796f-4efb-a8f8-31ddc947c7dd", 00:13:35.310 "is_configured": true, 00:13:35.310 "data_offset": 2048, 00:13:35.310 "data_size": 63488 00:13:35.310 }, 00:13:35.310 { 00:13:35.310 "name": "BaseBdev2", 00:13:35.310 "uuid": "db4d9707-f0cb-4480-b35a-f069aaf86231", 00:13:35.310 "is_configured": true, 00:13:35.310 "data_offset": 2048, 00:13:35.310 "data_size": 63488 00:13:35.310 }, 00:13:35.310 { 00:13:35.310 "name": "BaseBdev3", 00:13:35.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.310 "is_configured": false, 00:13:35.310 "data_offset": 0, 00:13:35.310 "data_size": 0 00:13:35.310 }, 00:13:35.310 { 00:13:35.310 "name": "BaseBdev4", 00:13:35.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.310 "is_configured": false, 00:13:35.310 "data_offset": 0, 00:13:35.310 "data_size": 0 00:13:35.310 } 00:13:35.310 ] 00:13:35.310 }' 00:13:35.310 06:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.310 06:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.568 06:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:35.568 06:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.568 06:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.868 [2024-11-26 06:23:19.706715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:35.868 BaseBdev3 00:13:35.868 06:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.868 06:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:35.868 06:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:13:35.868 06:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:35.868 06:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:35.868 06:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:35.868 06:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:35.868 06:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:35.868 06:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.868 06:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.868 06:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.868 06:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:35.868 06:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.868 06:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.868 [ 00:13:35.868 { 00:13:35.868 "name": "BaseBdev3", 00:13:35.868 "aliases": [ 00:13:35.868 "dbdcfa27-b0b7-44c2-b57b-f74392d1fab5" 00:13:35.868 ], 00:13:35.868 "product_name": "Malloc disk", 00:13:35.868 "block_size": 512, 00:13:35.868 "num_blocks": 65536, 00:13:35.868 "uuid": "dbdcfa27-b0b7-44c2-b57b-f74392d1fab5", 00:13:35.868 "assigned_rate_limits": { 00:13:35.868 "rw_ios_per_sec": 0, 00:13:35.868 "rw_mbytes_per_sec": 0, 00:13:35.868 "r_mbytes_per_sec": 0, 00:13:35.868 "w_mbytes_per_sec": 0 00:13:35.868 }, 00:13:35.868 "claimed": true, 00:13:35.868 "claim_type": "exclusive_write", 00:13:35.868 "zoned": false, 00:13:35.868 "supported_io_types": { 00:13:35.868 "read": true, 00:13:35.868 
"write": true, 00:13:35.868 "unmap": true, 00:13:35.868 "flush": true, 00:13:35.868 "reset": true, 00:13:35.868 "nvme_admin": false, 00:13:35.868 "nvme_io": false, 00:13:35.868 "nvme_io_md": false, 00:13:35.868 "write_zeroes": true, 00:13:35.868 "zcopy": true, 00:13:35.868 "get_zone_info": false, 00:13:35.868 "zone_management": false, 00:13:35.868 "zone_append": false, 00:13:35.868 "compare": false, 00:13:35.868 "compare_and_write": false, 00:13:35.868 "abort": true, 00:13:35.868 "seek_hole": false, 00:13:35.868 "seek_data": false, 00:13:35.868 "copy": true, 00:13:35.868 "nvme_iov_md": false 00:13:35.868 }, 00:13:35.868 "memory_domains": [ 00:13:35.868 { 00:13:35.868 "dma_device_id": "system", 00:13:35.868 "dma_device_type": 1 00:13:35.868 }, 00:13:35.868 { 00:13:35.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:35.868 "dma_device_type": 2 00:13:35.868 } 00:13:35.868 ], 00:13:35.868 "driver_specific": {} 00:13:35.868 } 00:13:35.868 ] 00:13:35.868 06:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.868 06:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:35.868 06:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:35.868 06:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:35.868 06:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:35.868 06:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:35.868 06:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:35.868 06:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:35.868 06:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:13:35.868 06:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:35.868 06:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.868 06:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.868 06:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.868 06:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.868 06:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:35.868 06:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.868 06:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.868 06:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.868 06:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.869 06:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.869 "name": "Existed_Raid", 00:13:35.869 "uuid": "46ab9dd2-5183-40de-9814-8f2663f3eeb3", 00:13:35.869 "strip_size_kb": 64, 00:13:35.869 "state": "configuring", 00:13:35.869 "raid_level": "raid0", 00:13:35.869 "superblock": true, 00:13:35.869 "num_base_bdevs": 4, 00:13:35.869 "num_base_bdevs_discovered": 3, 00:13:35.869 "num_base_bdevs_operational": 4, 00:13:35.869 "base_bdevs_list": [ 00:13:35.869 { 00:13:35.869 "name": "BaseBdev1", 00:13:35.869 "uuid": "415f803c-796f-4efb-a8f8-31ddc947c7dd", 00:13:35.869 "is_configured": true, 00:13:35.869 "data_offset": 2048, 00:13:35.869 "data_size": 63488 00:13:35.869 }, 00:13:35.869 { 00:13:35.869 "name": "BaseBdev2", 00:13:35.869 "uuid": 
"db4d9707-f0cb-4480-b35a-f069aaf86231", 00:13:35.869 "is_configured": true, 00:13:35.869 "data_offset": 2048, 00:13:35.869 "data_size": 63488 00:13:35.869 }, 00:13:35.869 { 00:13:35.869 "name": "BaseBdev3", 00:13:35.869 "uuid": "dbdcfa27-b0b7-44c2-b57b-f74392d1fab5", 00:13:35.869 "is_configured": true, 00:13:35.869 "data_offset": 2048, 00:13:35.869 "data_size": 63488 00:13:35.869 }, 00:13:35.869 { 00:13:35.869 "name": "BaseBdev4", 00:13:35.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.869 "is_configured": false, 00:13:35.869 "data_offset": 0, 00:13:35.869 "data_size": 0 00:13:35.869 } 00:13:35.869 ] 00:13:35.869 }' 00:13:35.869 06:23:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.869 06:23:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.132 06:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:36.132 06:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.132 06:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.132 [2024-11-26 06:23:20.245288] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:36.132 [2024-11-26 06:23:20.245789] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:36.132 [2024-11-26 06:23:20.245826] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:36.132 BaseBdev4 00:13:36.132 [2024-11-26 06:23:20.246285] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:36.132 [2024-11-26 06:23:20.246549] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:36.132 [2024-11-26 06:23:20.246587] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:13:36.132 [2024-11-26 06:23:20.246826] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:36.132 06:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.132 06:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:13:36.132 06:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:36.132 06:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:36.132 06:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:36.132 06:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:36.132 06:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:36.132 06:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:36.132 06:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.132 06:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.132 06:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.132 06:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:36.132 06:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.132 06:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.392 [ 00:13:36.392 { 00:13:36.392 "name": "BaseBdev4", 00:13:36.392 "aliases": [ 00:13:36.392 "54f279de-7e1d-4665-88ff-e2c17b6b25ab" 00:13:36.392 ], 00:13:36.392 "product_name": "Malloc disk", 00:13:36.392 "block_size": 512, 00:13:36.392 
"num_blocks": 65536, 00:13:36.392 "uuid": "54f279de-7e1d-4665-88ff-e2c17b6b25ab", 00:13:36.392 "assigned_rate_limits": { 00:13:36.392 "rw_ios_per_sec": 0, 00:13:36.392 "rw_mbytes_per_sec": 0, 00:13:36.392 "r_mbytes_per_sec": 0, 00:13:36.392 "w_mbytes_per_sec": 0 00:13:36.392 }, 00:13:36.392 "claimed": true, 00:13:36.392 "claim_type": "exclusive_write", 00:13:36.392 "zoned": false, 00:13:36.392 "supported_io_types": { 00:13:36.392 "read": true, 00:13:36.392 "write": true, 00:13:36.392 "unmap": true, 00:13:36.392 "flush": true, 00:13:36.392 "reset": true, 00:13:36.392 "nvme_admin": false, 00:13:36.392 "nvme_io": false, 00:13:36.392 "nvme_io_md": false, 00:13:36.392 "write_zeroes": true, 00:13:36.392 "zcopy": true, 00:13:36.392 "get_zone_info": false, 00:13:36.392 "zone_management": false, 00:13:36.392 "zone_append": false, 00:13:36.392 "compare": false, 00:13:36.392 "compare_and_write": false, 00:13:36.392 "abort": true, 00:13:36.392 "seek_hole": false, 00:13:36.392 "seek_data": false, 00:13:36.392 "copy": true, 00:13:36.392 "nvme_iov_md": false 00:13:36.392 }, 00:13:36.392 "memory_domains": [ 00:13:36.392 { 00:13:36.392 "dma_device_id": "system", 00:13:36.392 "dma_device_type": 1 00:13:36.392 }, 00:13:36.392 { 00:13:36.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:36.392 "dma_device_type": 2 00:13:36.392 } 00:13:36.392 ], 00:13:36.392 "driver_specific": {} 00:13:36.392 } 00:13:36.392 ] 00:13:36.392 06:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.392 06:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:36.392 06:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:36.392 06:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:36.392 06:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:13:36.392 06:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:36.392 06:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:36.392 06:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:36.392 06:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:36.392 06:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:36.392 06:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.392 06:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.392 06:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:36.392 06:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.392 06:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.392 06:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.392 06:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:36.392 06:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.392 06:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.392 06:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.392 "name": "Existed_Raid", 00:13:36.392 "uuid": "46ab9dd2-5183-40de-9814-8f2663f3eeb3", 00:13:36.392 "strip_size_kb": 64, 00:13:36.392 "state": "online", 00:13:36.392 "raid_level": "raid0", 00:13:36.392 "superblock": true, 00:13:36.392 "num_base_bdevs": 4, 
00:13:36.392 "num_base_bdevs_discovered": 4, 00:13:36.392 "num_base_bdevs_operational": 4, 00:13:36.392 "base_bdevs_list": [ 00:13:36.392 { 00:13:36.392 "name": "BaseBdev1", 00:13:36.392 "uuid": "415f803c-796f-4efb-a8f8-31ddc947c7dd", 00:13:36.392 "is_configured": true, 00:13:36.392 "data_offset": 2048, 00:13:36.392 "data_size": 63488 00:13:36.392 }, 00:13:36.392 { 00:13:36.392 "name": "BaseBdev2", 00:13:36.392 "uuid": "db4d9707-f0cb-4480-b35a-f069aaf86231", 00:13:36.392 "is_configured": true, 00:13:36.392 "data_offset": 2048, 00:13:36.392 "data_size": 63488 00:13:36.392 }, 00:13:36.392 { 00:13:36.392 "name": "BaseBdev3", 00:13:36.392 "uuid": "dbdcfa27-b0b7-44c2-b57b-f74392d1fab5", 00:13:36.392 "is_configured": true, 00:13:36.392 "data_offset": 2048, 00:13:36.392 "data_size": 63488 00:13:36.392 }, 00:13:36.392 { 00:13:36.392 "name": "BaseBdev4", 00:13:36.392 "uuid": "54f279de-7e1d-4665-88ff-e2c17b6b25ab", 00:13:36.392 "is_configured": true, 00:13:36.392 "data_offset": 2048, 00:13:36.392 "data_size": 63488 00:13:36.392 } 00:13:36.392 ] 00:13:36.392 }' 00:13:36.392 06:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.392 06:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.651 06:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:36.651 06:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:36.651 06:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:36.651 06:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:36.651 06:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:36.651 06:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:36.651 
06:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:36.651 06:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.651 06:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.651 06:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:36.651 [2024-11-26 06:23:20.741449] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:36.651 06:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.910 06:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:36.910 "name": "Existed_Raid", 00:13:36.910 "aliases": [ 00:13:36.910 "46ab9dd2-5183-40de-9814-8f2663f3eeb3" 00:13:36.910 ], 00:13:36.910 "product_name": "Raid Volume", 00:13:36.910 "block_size": 512, 00:13:36.910 "num_blocks": 253952, 00:13:36.910 "uuid": "46ab9dd2-5183-40de-9814-8f2663f3eeb3", 00:13:36.910 "assigned_rate_limits": { 00:13:36.910 "rw_ios_per_sec": 0, 00:13:36.910 "rw_mbytes_per_sec": 0, 00:13:36.910 "r_mbytes_per_sec": 0, 00:13:36.910 "w_mbytes_per_sec": 0 00:13:36.910 }, 00:13:36.910 "claimed": false, 00:13:36.910 "zoned": false, 00:13:36.910 "supported_io_types": { 00:13:36.910 "read": true, 00:13:36.910 "write": true, 00:13:36.910 "unmap": true, 00:13:36.910 "flush": true, 00:13:36.910 "reset": true, 00:13:36.910 "nvme_admin": false, 00:13:36.910 "nvme_io": false, 00:13:36.910 "nvme_io_md": false, 00:13:36.910 "write_zeroes": true, 00:13:36.910 "zcopy": false, 00:13:36.910 "get_zone_info": false, 00:13:36.910 "zone_management": false, 00:13:36.910 "zone_append": false, 00:13:36.910 "compare": false, 00:13:36.910 "compare_and_write": false, 00:13:36.910 "abort": false, 00:13:36.910 "seek_hole": false, 00:13:36.910 "seek_data": false, 00:13:36.910 "copy": false, 00:13:36.910 
"nvme_iov_md": false 00:13:36.910 }, 00:13:36.910 "memory_domains": [ 00:13:36.910 { 00:13:36.910 "dma_device_id": "system", 00:13:36.910 "dma_device_type": 1 00:13:36.910 }, 00:13:36.910 { 00:13:36.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:36.910 "dma_device_type": 2 00:13:36.910 }, 00:13:36.910 { 00:13:36.910 "dma_device_id": "system", 00:13:36.910 "dma_device_type": 1 00:13:36.910 }, 00:13:36.910 { 00:13:36.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:36.910 "dma_device_type": 2 00:13:36.910 }, 00:13:36.910 { 00:13:36.910 "dma_device_id": "system", 00:13:36.910 "dma_device_type": 1 00:13:36.910 }, 00:13:36.910 { 00:13:36.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:36.910 "dma_device_type": 2 00:13:36.910 }, 00:13:36.910 { 00:13:36.910 "dma_device_id": "system", 00:13:36.910 "dma_device_type": 1 00:13:36.910 }, 00:13:36.910 { 00:13:36.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:36.910 "dma_device_type": 2 00:13:36.910 } 00:13:36.910 ], 00:13:36.910 "driver_specific": { 00:13:36.910 "raid": { 00:13:36.910 "uuid": "46ab9dd2-5183-40de-9814-8f2663f3eeb3", 00:13:36.910 "strip_size_kb": 64, 00:13:36.910 "state": "online", 00:13:36.910 "raid_level": "raid0", 00:13:36.910 "superblock": true, 00:13:36.910 "num_base_bdevs": 4, 00:13:36.910 "num_base_bdevs_discovered": 4, 00:13:36.910 "num_base_bdevs_operational": 4, 00:13:36.910 "base_bdevs_list": [ 00:13:36.910 { 00:13:36.910 "name": "BaseBdev1", 00:13:36.910 "uuid": "415f803c-796f-4efb-a8f8-31ddc947c7dd", 00:13:36.910 "is_configured": true, 00:13:36.910 "data_offset": 2048, 00:13:36.910 "data_size": 63488 00:13:36.910 }, 00:13:36.910 { 00:13:36.910 "name": "BaseBdev2", 00:13:36.910 "uuid": "db4d9707-f0cb-4480-b35a-f069aaf86231", 00:13:36.910 "is_configured": true, 00:13:36.910 "data_offset": 2048, 00:13:36.910 "data_size": 63488 00:13:36.910 }, 00:13:36.910 { 00:13:36.910 "name": "BaseBdev3", 00:13:36.910 "uuid": "dbdcfa27-b0b7-44c2-b57b-f74392d1fab5", 00:13:36.910 "is_configured": true, 
00:13:36.910 "data_offset": 2048, 00:13:36.910 "data_size": 63488 00:13:36.910 }, 00:13:36.910 { 00:13:36.910 "name": "BaseBdev4", 00:13:36.910 "uuid": "54f279de-7e1d-4665-88ff-e2c17b6b25ab", 00:13:36.910 "is_configured": true, 00:13:36.910 "data_offset": 2048, 00:13:36.910 "data_size": 63488 00:13:36.910 } 00:13:36.910 ] 00:13:36.910 } 00:13:36.910 } 00:13:36.910 }' 00:13:36.910 06:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:36.910 06:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:36.910 BaseBdev2 00:13:36.910 BaseBdev3 00:13:36.910 BaseBdev4' 00:13:36.910 06:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:36.910 06:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:36.910 06:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:36.910 06:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:36.910 06:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:36.910 06:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.910 06:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.910 06:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.910 06:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:36.910 06:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:36.911 06:23:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:36.911 06:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:36.911 06:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:36.911 06:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.911 06:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.911 06:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.911 06:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:36.911 06:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:36.911 06:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:36.911 06:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:36.911 06:23:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:36.911 06:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.911 06:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.911 06:23:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.911 06:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:36.911 06:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:36.911 06:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:13:36.911 06:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:36.911 06:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.911 06:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.911 06:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:37.171 06:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.171 06:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:37.171 06:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:37.171 06:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:37.171 06:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.171 06:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.171 [2024-11-26 06:23:21.072561] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:37.171 [2024-11-26 06:23:21.072614] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:37.171 [2024-11-26 06:23:21.072675] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:37.171 06:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.171 06:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:37.171 06:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:13:37.171 06:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:13:37.171 06:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:13:37.171 06:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:13:37.171 06:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:13:37.171 06:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:37.171 06:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:13:37.171 06:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:37.171 06:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:37.171 06:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:37.171 06:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:37.171 06:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:37.171 06:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:37.171 06:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:37.171 06:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.171 06:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:37.171 06:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.171 06:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.171 06:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
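The `has_redundancy raid0` call above returns 1, so the test sets `expected_state=offline`: with no redundancy, deleting BaseBdev1 must take the whole array offline and drop `num_base_bdevs_operational` from 4 to 3. A hedged sketch of that branch; the exact set of levels the script's `case` statement treats as redundant is an assumption here (only raid0's non-redundancy is confirmed by this trace):

```python
# Levels assumed to count as redundant in has_redundancy; the trace only
# confirms that raid0 is NOT in this set (return value 1 above).
REDUNDANT_LEVELS = {"raid1", "raid5f"}

def expected_state_after_removal(raid_level: str) -> str:
    """Mirror of the expected_state logic: redundant levels survive the
    loss of one base bdev and stay online; striping-only levels go offline."""
    return "online" if raid_level in REDUNDANT_LEVELS else "offline"

assert expected_state_after_removal("raid0") == "offline"
```

This is why the follow-up check is `verify_raid_bdev_state Existed_Raid offline raid0 64 3` rather than `online ... 4`.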
00:13:37.171 06:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:37.171 "name": "Existed_Raid", 00:13:37.171 "uuid": "46ab9dd2-5183-40de-9814-8f2663f3eeb3", 00:13:37.171 "strip_size_kb": 64, 00:13:37.171 "state": "offline", 00:13:37.171 "raid_level": "raid0", 00:13:37.171 "superblock": true, 00:13:37.171 "num_base_bdevs": 4, 00:13:37.171 "num_base_bdevs_discovered": 3, 00:13:37.171 "num_base_bdevs_operational": 3, 00:13:37.171 "base_bdevs_list": [ 00:13:37.171 { 00:13:37.171 "name": null, 00:13:37.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.171 "is_configured": false, 00:13:37.171 "data_offset": 0, 00:13:37.171 "data_size": 63488 00:13:37.171 }, 00:13:37.171 { 00:13:37.171 "name": "BaseBdev2", 00:13:37.171 "uuid": "db4d9707-f0cb-4480-b35a-f069aaf86231", 00:13:37.171 "is_configured": true, 00:13:37.171 "data_offset": 2048, 00:13:37.171 "data_size": 63488 00:13:37.171 }, 00:13:37.171 { 00:13:37.171 "name": "BaseBdev3", 00:13:37.171 "uuid": "dbdcfa27-b0b7-44c2-b57b-f74392d1fab5", 00:13:37.171 "is_configured": true, 00:13:37.171 "data_offset": 2048, 00:13:37.171 "data_size": 63488 00:13:37.171 }, 00:13:37.171 { 00:13:37.171 "name": "BaseBdev4", 00:13:37.171 "uuid": "54f279de-7e1d-4665-88ff-e2c17b6b25ab", 00:13:37.171 "is_configured": true, 00:13:37.171 "data_offset": 2048, 00:13:37.171 "data_size": 63488 00:13:37.171 } 00:13:37.171 ] 00:13:37.171 }' 00:13:37.171 06:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:37.171 06:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.739 06:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:37.739 06:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:37.739 06:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.739 
06:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.739 06:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.739 06:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:37.739 06:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.739 06:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:37.739 06:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:37.739 06:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:37.739 06:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.739 06:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.739 [2024-11-26 06:23:21.629918] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:37.739 06:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.739 06:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:37.739 06:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:37.739 06:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.739 06:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.739 06:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.739 06:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:37.739 06:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:13:37.739 06:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:37.739 06:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:37.739 06:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:37.739 06:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.739 06:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.739 [2024-11-26 06:23:21.803016] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:37.997 06:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.997 06:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:37.997 06:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:37.997 06:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:37.997 06:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.997 06:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.997 06:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.997 06:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.997 06:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:37.997 06:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:37.997 06:23:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:37.997 06:23:21 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.997 06:23:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.997 [2024-11-26 06:23:21.964493] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:37.997 [2024-11-26 06:23:21.964568] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:37.997 06:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.997 06:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:37.997 06:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:37.997 06:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.997 06:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.997 06:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.997 06:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:37.997 06:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.256 06:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:38.256 06:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:38.256 06:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:38.256 06:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:38.256 06:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:38.256 06:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:13:38.256 06:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.256 06:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.256 BaseBdev2 00:13:38.256 06:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.256 06:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:38.256 06:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:38.256 06:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:38.256 06:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:38.256 06:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:38.256 06:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:38.256 06:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:38.256 06:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.256 06:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.256 06:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.256 06:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:38.256 06:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.256 06:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.256 [ 00:13:38.256 { 00:13:38.256 "name": "BaseBdev2", 00:13:38.256 "aliases": [ 00:13:38.256 
"687ea3e6-b392-4329-ba09-0315102ecb85" 00:13:38.256 ], 00:13:38.256 "product_name": "Malloc disk", 00:13:38.256 "block_size": 512, 00:13:38.256 "num_blocks": 65536, 00:13:38.256 "uuid": "687ea3e6-b392-4329-ba09-0315102ecb85", 00:13:38.256 "assigned_rate_limits": { 00:13:38.256 "rw_ios_per_sec": 0, 00:13:38.256 "rw_mbytes_per_sec": 0, 00:13:38.256 "r_mbytes_per_sec": 0, 00:13:38.256 "w_mbytes_per_sec": 0 00:13:38.256 }, 00:13:38.256 "claimed": false, 00:13:38.256 "zoned": false, 00:13:38.256 "supported_io_types": { 00:13:38.256 "read": true, 00:13:38.256 "write": true, 00:13:38.256 "unmap": true, 00:13:38.256 "flush": true, 00:13:38.256 "reset": true, 00:13:38.256 "nvme_admin": false, 00:13:38.256 "nvme_io": false, 00:13:38.256 "nvme_io_md": false, 00:13:38.256 "write_zeroes": true, 00:13:38.256 "zcopy": true, 00:13:38.256 "get_zone_info": false, 00:13:38.256 "zone_management": false, 00:13:38.256 "zone_append": false, 00:13:38.256 "compare": false, 00:13:38.256 "compare_and_write": false, 00:13:38.256 "abort": true, 00:13:38.256 "seek_hole": false, 00:13:38.256 "seek_data": false, 00:13:38.256 "copy": true, 00:13:38.256 "nvme_iov_md": false 00:13:38.256 }, 00:13:38.256 "memory_domains": [ 00:13:38.256 { 00:13:38.256 "dma_device_id": "system", 00:13:38.256 "dma_device_type": 1 00:13:38.256 }, 00:13:38.256 { 00:13:38.256 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:38.256 "dma_device_type": 2 00:13:38.256 } 00:13:38.256 ], 00:13:38.256 "driver_specific": {} 00:13:38.256 } 00:13:38.256 ] 00:13:38.256 06:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.256 06:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:38.256 06:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:38.256 06:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:38.256 06:23:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:38.256 06:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.256 06:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.256 BaseBdev3 00:13:38.256 06:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.256 06:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:38.256 06:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:38.256 06:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:38.256 06:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:38.256 06:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:38.256 06:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:38.256 06:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:38.256 06:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.256 06:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.256 06:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.256 06:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:38.256 06:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.256 06:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.256 [ 00:13:38.256 { 
00:13:38.256 "name": "BaseBdev3", 00:13:38.256 "aliases": [ 00:13:38.256 "c0d8dc1b-fc8e-4607-a313-c68db2822c09" 00:13:38.256 ], 00:13:38.256 "product_name": "Malloc disk", 00:13:38.256 "block_size": 512, 00:13:38.256 "num_blocks": 65536, 00:13:38.256 "uuid": "c0d8dc1b-fc8e-4607-a313-c68db2822c09", 00:13:38.256 "assigned_rate_limits": { 00:13:38.256 "rw_ios_per_sec": 0, 00:13:38.256 "rw_mbytes_per_sec": 0, 00:13:38.256 "r_mbytes_per_sec": 0, 00:13:38.256 "w_mbytes_per_sec": 0 00:13:38.256 }, 00:13:38.256 "claimed": false, 00:13:38.256 "zoned": false, 00:13:38.256 "supported_io_types": { 00:13:38.256 "read": true, 00:13:38.256 "write": true, 00:13:38.256 "unmap": true, 00:13:38.256 "flush": true, 00:13:38.256 "reset": true, 00:13:38.256 "nvme_admin": false, 00:13:38.256 "nvme_io": false, 00:13:38.256 "nvme_io_md": false, 00:13:38.256 "write_zeroes": true, 00:13:38.256 "zcopy": true, 00:13:38.256 "get_zone_info": false, 00:13:38.257 "zone_management": false, 00:13:38.257 "zone_append": false, 00:13:38.257 "compare": false, 00:13:38.257 "compare_and_write": false, 00:13:38.257 "abort": true, 00:13:38.257 "seek_hole": false, 00:13:38.257 "seek_data": false, 00:13:38.257 "copy": true, 00:13:38.257 "nvme_iov_md": false 00:13:38.257 }, 00:13:38.257 "memory_domains": [ 00:13:38.257 { 00:13:38.257 "dma_device_id": "system", 00:13:38.257 "dma_device_type": 1 00:13:38.257 }, 00:13:38.257 { 00:13:38.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:38.257 "dma_device_type": 2 00:13:38.257 } 00:13:38.257 ], 00:13:38.257 "driver_specific": {} 00:13:38.257 } 00:13:38.257 ] 00:13:38.257 06:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.257 06:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:38.257 06:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:38.257 06:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:13:38.257 06:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:38.257 06:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.257 06:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.257 BaseBdev4 00:13:38.257 06:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.257 06:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:38.257 06:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:38.257 06:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:38.257 06:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:38.257 06:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:38.257 06:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:38.257 06:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:38.257 06:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.257 06:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.257 06:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.257 06:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:38.257 06:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.257 06:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:13:38.257 [ 00:13:38.257 { 00:13:38.257 "name": "BaseBdev4", 00:13:38.257 "aliases": [ 00:13:38.257 "0fb5cc58-8952-4127-8468-b8a270153e26" 00:13:38.257 ], 00:13:38.257 "product_name": "Malloc disk", 00:13:38.257 "block_size": 512, 00:13:38.257 "num_blocks": 65536, 00:13:38.257 "uuid": "0fb5cc58-8952-4127-8468-b8a270153e26", 00:13:38.257 "assigned_rate_limits": { 00:13:38.257 "rw_ios_per_sec": 0, 00:13:38.257 "rw_mbytes_per_sec": 0, 00:13:38.257 "r_mbytes_per_sec": 0, 00:13:38.257 "w_mbytes_per_sec": 0 00:13:38.257 }, 00:13:38.257 "claimed": false, 00:13:38.257 "zoned": false, 00:13:38.257 "supported_io_types": { 00:13:38.257 "read": true, 00:13:38.257 "write": true, 00:13:38.257 "unmap": true, 00:13:38.257 "flush": true, 00:13:38.257 "reset": true, 00:13:38.257 "nvme_admin": false, 00:13:38.257 "nvme_io": false, 00:13:38.257 "nvme_io_md": false, 00:13:38.257 "write_zeroes": true, 00:13:38.257 "zcopy": true, 00:13:38.257 "get_zone_info": false, 00:13:38.257 "zone_management": false, 00:13:38.257 "zone_append": false, 00:13:38.257 "compare": false, 00:13:38.257 "compare_and_write": false, 00:13:38.257 "abort": true, 00:13:38.257 "seek_hole": false, 00:13:38.257 "seek_data": false, 00:13:38.257 "copy": true, 00:13:38.257 "nvme_iov_md": false 00:13:38.257 }, 00:13:38.257 "memory_domains": [ 00:13:38.257 { 00:13:38.257 "dma_device_id": "system", 00:13:38.257 "dma_device_type": 1 00:13:38.257 }, 00:13:38.257 { 00:13:38.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:38.257 "dma_device_type": 2 00:13:38.257 } 00:13:38.257 ], 00:13:38.257 "driver_specific": {} 00:13:38.257 } 00:13:38.257 ] 00:13:38.257 06:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.257 06:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:38.257 06:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:38.257 06:23:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:38.257 06:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:38.257 06:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.257 06:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.257 [2024-11-26 06:23:22.383881] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:38.257 [2024-11-26 06:23:22.383960] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:38.257 [2024-11-26 06:23:22.383988] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:38.257 [2024-11-26 06:23:22.386162] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:38.257 [2024-11-26 06:23:22.386221] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:38.515 06:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.515 06:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:38.515 06:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:38.515 06:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:38.515 06:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:38.515 06:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:38.515 06:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:13:38.515 06:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:38.515 06:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:38.515 06:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:38.515 06:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:38.515 06:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.515 06:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:38.515 06:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.515 06:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.515 06:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.515 06:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:38.515 "name": "Existed_Raid", 00:13:38.515 "uuid": "9e79583a-e326-475c-b570-106042ec5334", 00:13:38.515 "strip_size_kb": 64, 00:13:38.515 "state": "configuring", 00:13:38.515 "raid_level": "raid0", 00:13:38.515 "superblock": true, 00:13:38.515 "num_base_bdevs": 4, 00:13:38.515 "num_base_bdevs_discovered": 3, 00:13:38.515 "num_base_bdevs_operational": 4, 00:13:38.515 "base_bdevs_list": [ 00:13:38.515 { 00:13:38.515 "name": "BaseBdev1", 00:13:38.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.515 "is_configured": false, 00:13:38.515 "data_offset": 0, 00:13:38.515 "data_size": 0 00:13:38.515 }, 00:13:38.515 { 00:13:38.515 "name": "BaseBdev2", 00:13:38.515 "uuid": "687ea3e6-b392-4329-ba09-0315102ecb85", 00:13:38.515 "is_configured": true, 00:13:38.515 "data_offset": 2048, 00:13:38.515 "data_size": 63488 
00:13:38.515 }, 00:13:38.515 { 00:13:38.515 "name": "BaseBdev3", 00:13:38.515 "uuid": "c0d8dc1b-fc8e-4607-a313-c68db2822c09", 00:13:38.515 "is_configured": true, 00:13:38.515 "data_offset": 2048, 00:13:38.515 "data_size": 63488 00:13:38.515 }, 00:13:38.515 { 00:13:38.515 "name": "BaseBdev4", 00:13:38.515 "uuid": "0fb5cc58-8952-4127-8468-b8a270153e26", 00:13:38.515 "is_configured": true, 00:13:38.515 "data_offset": 2048, 00:13:38.515 "data_size": 63488 00:13:38.515 } 00:13:38.515 ] 00:13:38.515 }' 00:13:38.515 06:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:38.515 06:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.779 06:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:38.779 06:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.779 06:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.779 [2024-11-26 06:23:22.835163] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:38.779 06:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.779 06:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:38.779 06:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:38.779 06:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:38.779 06:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:38.779 06:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:38.779 06:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:13:38.779 06:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:38.779 06:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:38.779 06:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:38.779 06:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:38.780 06:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.780 06:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.780 06:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.780 06:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:38.780 06:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.780 06:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:38.780 "name": "Existed_Raid", 00:13:38.780 "uuid": "9e79583a-e326-475c-b570-106042ec5334", 00:13:38.780 "strip_size_kb": 64, 00:13:38.780 "state": "configuring", 00:13:38.780 "raid_level": "raid0", 00:13:38.780 "superblock": true, 00:13:38.780 "num_base_bdevs": 4, 00:13:38.780 "num_base_bdevs_discovered": 2, 00:13:38.780 "num_base_bdevs_operational": 4, 00:13:38.780 "base_bdevs_list": [ 00:13:38.780 { 00:13:38.780 "name": "BaseBdev1", 00:13:38.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.780 "is_configured": false, 00:13:38.780 "data_offset": 0, 00:13:38.780 "data_size": 0 00:13:38.780 }, 00:13:38.780 { 00:13:38.780 "name": null, 00:13:38.780 "uuid": "687ea3e6-b392-4329-ba09-0315102ecb85", 00:13:38.780 "is_configured": false, 00:13:38.780 "data_offset": 0, 00:13:38.780 "data_size": 63488 
00:13:38.780 }, 00:13:38.780 { 00:13:38.780 "name": "BaseBdev3", 00:13:38.780 "uuid": "c0d8dc1b-fc8e-4607-a313-c68db2822c09", 00:13:38.780 "is_configured": true, 00:13:38.780 "data_offset": 2048, 00:13:38.780 "data_size": 63488 00:13:38.780 }, 00:13:38.780 { 00:13:38.780 "name": "BaseBdev4", 00:13:38.780 "uuid": "0fb5cc58-8952-4127-8468-b8a270153e26", 00:13:38.780 "is_configured": true, 00:13:38.780 "data_offset": 2048, 00:13:38.780 "data_size": 63488 00:13:38.780 } 00:13:38.780 ] 00:13:38.780 }' 00:13:38.780 06:23:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:38.780 06:23:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.356 06:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.356 06:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.356 06:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.356 06:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:39.356 06:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.356 06:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:39.356 06:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:39.356 06:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.356 06:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.356 [2024-11-26 06:23:23.393664] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:39.356 BaseBdev1 00:13:39.356 06:23:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.356 06:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:39.356 06:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:39.356 06:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:39.356 06:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:39.356 06:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:39.357 06:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:39.357 06:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:39.357 06:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.357 06:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.357 06:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.357 06:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:39.357 06:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.357 06:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.357 [ 00:13:39.357 { 00:13:39.357 "name": "BaseBdev1", 00:13:39.357 "aliases": [ 00:13:39.357 "e5754f52-51a3-44f7-af2d-4090e4ad6cbd" 00:13:39.357 ], 00:13:39.357 "product_name": "Malloc disk", 00:13:39.357 "block_size": 512, 00:13:39.357 "num_blocks": 65536, 00:13:39.357 "uuid": "e5754f52-51a3-44f7-af2d-4090e4ad6cbd", 00:13:39.357 "assigned_rate_limits": { 00:13:39.357 "rw_ios_per_sec": 0, 00:13:39.357 "rw_mbytes_per_sec": 0, 
00:13:39.357 "r_mbytes_per_sec": 0, 00:13:39.357 "w_mbytes_per_sec": 0 00:13:39.357 }, 00:13:39.357 "claimed": true, 00:13:39.357 "claim_type": "exclusive_write", 00:13:39.357 "zoned": false, 00:13:39.357 "supported_io_types": { 00:13:39.357 "read": true, 00:13:39.357 "write": true, 00:13:39.357 "unmap": true, 00:13:39.357 "flush": true, 00:13:39.357 "reset": true, 00:13:39.357 "nvme_admin": false, 00:13:39.357 "nvme_io": false, 00:13:39.357 "nvme_io_md": false, 00:13:39.357 "write_zeroes": true, 00:13:39.357 "zcopy": true, 00:13:39.357 "get_zone_info": false, 00:13:39.357 "zone_management": false, 00:13:39.357 "zone_append": false, 00:13:39.357 "compare": false, 00:13:39.357 "compare_and_write": false, 00:13:39.357 "abort": true, 00:13:39.357 "seek_hole": false, 00:13:39.357 "seek_data": false, 00:13:39.357 "copy": true, 00:13:39.357 "nvme_iov_md": false 00:13:39.357 }, 00:13:39.357 "memory_domains": [ 00:13:39.357 { 00:13:39.357 "dma_device_id": "system", 00:13:39.357 "dma_device_type": 1 00:13:39.357 }, 00:13:39.357 { 00:13:39.357 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:39.357 "dma_device_type": 2 00:13:39.357 } 00:13:39.357 ], 00:13:39.357 "driver_specific": {} 00:13:39.357 } 00:13:39.357 ] 00:13:39.357 06:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.357 06:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:39.357 06:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:39.357 06:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:39.357 06:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:39.357 06:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:39.357 06:23:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:39.357 06:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:39.357 06:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:39.357 06:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:39.357 06:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:39.357 06:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:39.357 06:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:39.357 06:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.357 06:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.357 06:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.357 06:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.357 06:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:39.357 "name": "Existed_Raid", 00:13:39.357 "uuid": "9e79583a-e326-475c-b570-106042ec5334", 00:13:39.357 "strip_size_kb": 64, 00:13:39.357 "state": "configuring", 00:13:39.357 "raid_level": "raid0", 00:13:39.357 "superblock": true, 00:13:39.357 "num_base_bdevs": 4, 00:13:39.357 "num_base_bdevs_discovered": 3, 00:13:39.357 "num_base_bdevs_operational": 4, 00:13:39.357 "base_bdevs_list": [ 00:13:39.357 { 00:13:39.357 "name": "BaseBdev1", 00:13:39.357 "uuid": "e5754f52-51a3-44f7-af2d-4090e4ad6cbd", 00:13:39.357 "is_configured": true, 00:13:39.357 "data_offset": 2048, 00:13:39.357 "data_size": 63488 00:13:39.357 }, 00:13:39.357 { 
00:13:39.357 "name": null, 00:13:39.357 "uuid": "687ea3e6-b392-4329-ba09-0315102ecb85", 00:13:39.357 "is_configured": false, 00:13:39.357 "data_offset": 0, 00:13:39.357 "data_size": 63488 00:13:39.357 }, 00:13:39.357 { 00:13:39.357 "name": "BaseBdev3", 00:13:39.357 "uuid": "c0d8dc1b-fc8e-4607-a313-c68db2822c09", 00:13:39.357 "is_configured": true, 00:13:39.357 "data_offset": 2048, 00:13:39.357 "data_size": 63488 00:13:39.357 }, 00:13:39.357 { 00:13:39.357 "name": "BaseBdev4", 00:13:39.357 "uuid": "0fb5cc58-8952-4127-8468-b8a270153e26", 00:13:39.357 "is_configured": true, 00:13:39.357 "data_offset": 2048, 00:13:39.357 "data_size": 63488 00:13:39.357 } 00:13:39.357 ] 00:13:39.357 }' 00:13:39.357 06:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:39.357 06:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.926 06:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.926 06:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:39.926 06:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.926 06:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.926 06:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.926 06:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:39.926 06:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:39.926 06:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.926 06:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.926 [2024-11-26 06:23:23.969282] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:39.927 06:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.927 06:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:39.927 06:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:39.927 06:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:39.927 06:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:39.927 06:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:39.927 06:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:39.927 06:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:39.927 06:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:39.927 06:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:39.927 06:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:39.927 06:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.927 06:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.927 06:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.927 06:23:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:39.927 06:23:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.927 06:23:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:39.927 "name": "Existed_Raid", 00:13:39.927 "uuid": "9e79583a-e326-475c-b570-106042ec5334", 00:13:39.927 "strip_size_kb": 64, 00:13:39.927 "state": "configuring", 00:13:39.927 "raid_level": "raid0", 00:13:39.927 "superblock": true, 00:13:39.927 "num_base_bdevs": 4, 00:13:39.927 "num_base_bdevs_discovered": 2, 00:13:39.927 "num_base_bdevs_operational": 4, 00:13:39.927 "base_bdevs_list": [ 00:13:39.927 { 00:13:39.927 "name": "BaseBdev1", 00:13:39.927 "uuid": "e5754f52-51a3-44f7-af2d-4090e4ad6cbd", 00:13:39.927 "is_configured": true, 00:13:39.927 "data_offset": 2048, 00:13:39.927 "data_size": 63488 00:13:39.927 }, 00:13:39.927 { 00:13:39.927 "name": null, 00:13:39.927 "uuid": "687ea3e6-b392-4329-ba09-0315102ecb85", 00:13:39.927 "is_configured": false, 00:13:39.927 "data_offset": 0, 00:13:39.927 "data_size": 63488 00:13:39.927 }, 00:13:39.927 { 00:13:39.927 "name": null, 00:13:39.927 "uuid": "c0d8dc1b-fc8e-4607-a313-c68db2822c09", 00:13:39.927 "is_configured": false, 00:13:39.927 "data_offset": 0, 00:13:39.927 "data_size": 63488 00:13:39.927 }, 00:13:39.927 { 00:13:39.927 "name": "BaseBdev4", 00:13:39.927 "uuid": "0fb5cc58-8952-4127-8468-b8a270153e26", 00:13:39.927 "is_configured": true, 00:13:39.927 "data_offset": 2048, 00:13:39.927 "data_size": 63488 00:13:39.927 } 00:13:39.927 ] 00:13:39.927 }' 00:13:39.927 06:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:39.927 06:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.496 06:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:40.496 06:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.496 06:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.496 
06:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.496 06:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.496 06:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:40.496 06:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:40.496 06:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.496 06:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.496 [2024-11-26 06:23:24.448795] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:40.496 06:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.496 06:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:40.496 06:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:40.496 06:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:40.496 06:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:40.496 06:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:40.496 06:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:40.497 06:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:40.497 06:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:40.497 06:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:40.497 06:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:40.497 06:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.497 06:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.497 06:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:40.497 06:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.497 06:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.497 06:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:40.497 "name": "Existed_Raid", 00:13:40.497 "uuid": "9e79583a-e326-475c-b570-106042ec5334", 00:13:40.497 "strip_size_kb": 64, 00:13:40.497 "state": "configuring", 00:13:40.497 "raid_level": "raid0", 00:13:40.497 "superblock": true, 00:13:40.497 "num_base_bdevs": 4, 00:13:40.497 "num_base_bdevs_discovered": 3, 00:13:40.497 "num_base_bdevs_operational": 4, 00:13:40.497 "base_bdevs_list": [ 00:13:40.497 { 00:13:40.497 "name": "BaseBdev1", 00:13:40.497 "uuid": "e5754f52-51a3-44f7-af2d-4090e4ad6cbd", 00:13:40.497 "is_configured": true, 00:13:40.497 "data_offset": 2048, 00:13:40.497 "data_size": 63488 00:13:40.497 }, 00:13:40.497 { 00:13:40.497 "name": null, 00:13:40.497 "uuid": "687ea3e6-b392-4329-ba09-0315102ecb85", 00:13:40.497 "is_configured": false, 00:13:40.497 "data_offset": 0, 00:13:40.497 "data_size": 63488 00:13:40.497 }, 00:13:40.497 { 00:13:40.497 "name": "BaseBdev3", 00:13:40.497 "uuid": "c0d8dc1b-fc8e-4607-a313-c68db2822c09", 00:13:40.497 "is_configured": true, 00:13:40.497 "data_offset": 2048, 00:13:40.497 "data_size": 63488 00:13:40.497 }, 00:13:40.497 { 00:13:40.497 "name": "BaseBdev4", 00:13:40.497 "uuid": 
"0fb5cc58-8952-4127-8468-b8a270153e26", 00:13:40.497 "is_configured": true, 00:13:40.497 "data_offset": 2048, 00:13:40.497 "data_size": 63488 00:13:40.497 } 00:13:40.497 ] 00:13:40.497 }' 00:13:40.497 06:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:40.497 06:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.066 06:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.066 06:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:41.066 06:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.066 06:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.066 06:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.066 06:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:41.066 06:23:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:41.066 06:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.066 06:23:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.066 [2024-11-26 06:23:24.952017] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:41.066 06:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.066 06:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:41.066 06:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:41.066 06:23:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:41.066 06:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:41.066 06:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:41.066 06:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:41.066 06:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.066 06:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.066 06:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.066 06:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.066 06:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:41.066 06:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.066 06:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.066 06:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.066 06:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.066 06:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.066 "name": "Existed_Raid", 00:13:41.066 "uuid": "9e79583a-e326-475c-b570-106042ec5334", 00:13:41.066 "strip_size_kb": 64, 00:13:41.066 "state": "configuring", 00:13:41.066 "raid_level": "raid0", 00:13:41.066 "superblock": true, 00:13:41.066 "num_base_bdevs": 4, 00:13:41.066 "num_base_bdevs_discovered": 2, 00:13:41.066 "num_base_bdevs_operational": 4, 00:13:41.066 "base_bdevs_list": [ 00:13:41.066 { 00:13:41.066 "name": null, 00:13:41.066 
"uuid": "e5754f52-51a3-44f7-af2d-4090e4ad6cbd", 00:13:41.066 "is_configured": false, 00:13:41.066 "data_offset": 0, 00:13:41.066 "data_size": 63488 00:13:41.066 }, 00:13:41.066 { 00:13:41.066 "name": null, 00:13:41.066 "uuid": "687ea3e6-b392-4329-ba09-0315102ecb85", 00:13:41.066 "is_configured": false, 00:13:41.066 "data_offset": 0, 00:13:41.066 "data_size": 63488 00:13:41.066 }, 00:13:41.066 { 00:13:41.066 "name": "BaseBdev3", 00:13:41.066 "uuid": "c0d8dc1b-fc8e-4607-a313-c68db2822c09", 00:13:41.066 "is_configured": true, 00:13:41.066 "data_offset": 2048, 00:13:41.066 "data_size": 63488 00:13:41.066 }, 00:13:41.066 { 00:13:41.066 "name": "BaseBdev4", 00:13:41.066 "uuid": "0fb5cc58-8952-4127-8468-b8a270153e26", 00:13:41.066 "is_configured": true, 00:13:41.066 "data_offset": 2048, 00:13:41.066 "data_size": 63488 00:13:41.066 } 00:13:41.066 ] 00:13:41.066 }' 00:13:41.066 06:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.066 06:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.635 06:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.635 06:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:41.635 06:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.635 06:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.635 06:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.635 06:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:41.635 06:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:41.635 06:23:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.635 06:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.635 [2024-11-26 06:23:25.562128] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:41.635 06:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.635 06:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:41.635 06:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:41.635 06:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:41.635 06:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:41.635 06:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:41.635 06:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:41.635 06:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.635 06:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.635 06:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.635 06:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.635 06:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.635 06:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:41.635 06:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.635 06:23:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.635 06:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.635 06:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.635 "name": "Existed_Raid", 00:13:41.635 "uuid": "9e79583a-e326-475c-b570-106042ec5334", 00:13:41.635 "strip_size_kb": 64, 00:13:41.635 "state": "configuring", 00:13:41.635 "raid_level": "raid0", 00:13:41.635 "superblock": true, 00:13:41.635 "num_base_bdevs": 4, 00:13:41.635 "num_base_bdevs_discovered": 3, 00:13:41.635 "num_base_bdevs_operational": 4, 00:13:41.635 "base_bdevs_list": [ 00:13:41.635 { 00:13:41.635 "name": null, 00:13:41.635 "uuid": "e5754f52-51a3-44f7-af2d-4090e4ad6cbd", 00:13:41.635 "is_configured": false, 00:13:41.635 "data_offset": 0, 00:13:41.635 "data_size": 63488 00:13:41.635 }, 00:13:41.635 { 00:13:41.635 "name": "BaseBdev2", 00:13:41.635 "uuid": "687ea3e6-b392-4329-ba09-0315102ecb85", 00:13:41.635 "is_configured": true, 00:13:41.635 "data_offset": 2048, 00:13:41.635 "data_size": 63488 00:13:41.635 }, 00:13:41.635 { 00:13:41.635 "name": "BaseBdev3", 00:13:41.635 "uuid": "c0d8dc1b-fc8e-4607-a313-c68db2822c09", 00:13:41.635 "is_configured": true, 00:13:41.635 "data_offset": 2048, 00:13:41.635 "data_size": 63488 00:13:41.635 }, 00:13:41.635 { 00:13:41.635 "name": "BaseBdev4", 00:13:41.635 "uuid": "0fb5cc58-8952-4127-8468-b8a270153e26", 00:13:41.635 "is_configured": true, 00:13:41.635 "data_offset": 2048, 00:13:41.635 "data_size": 63488 00:13:41.635 } 00:13:41.635 ] 00:13:41.635 }' 00:13:41.635 06:23:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.635 06:23:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.205 06:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.205 06:23:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.205 06:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.205 06:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:42.205 06:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.205 06:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:42.205 06:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.205 06:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:42.205 06:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.205 06:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.205 06:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.205 06:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e5754f52-51a3-44f7-af2d-4090e4ad6cbd 00:13:42.205 06:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.205 06:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.205 [2024-11-26 06:23:26.173419] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:42.205 [2024-11-26 06:23:26.173789] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:42.205 [2024-11-26 06:23:26.173809] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:42.205 [2024-11-26 06:23:26.174256] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:13:42.205 NewBaseBdev 00:13:42.205 [2024-11-26 06:23:26.174491] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:42.205 [2024-11-26 06:23:26.174512] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:42.205 [2024-11-26 06:23:26.174707] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:42.205 06:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.205 06:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:42.205 06:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:13:42.205 06:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:42.205 06:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:42.205 06:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:42.205 06:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:42.205 06:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:42.205 06:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.205 06:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.205 06:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.205 06:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:42.205 06:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.205 06:23:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.205 [ 00:13:42.205 { 00:13:42.205 "name": "NewBaseBdev", 00:13:42.205 "aliases": [ 00:13:42.205 "e5754f52-51a3-44f7-af2d-4090e4ad6cbd" 00:13:42.205 ], 00:13:42.205 "product_name": "Malloc disk", 00:13:42.205 "block_size": 512, 00:13:42.205 "num_blocks": 65536, 00:13:42.205 "uuid": "e5754f52-51a3-44f7-af2d-4090e4ad6cbd", 00:13:42.205 "assigned_rate_limits": { 00:13:42.205 "rw_ios_per_sec": 0, 00:13:42.205 "rw_mbytes_per_sec": 0, 00:13:42.205 "r_mbytes_per_sec": 0, 00:13:42.205 "w_mbytes_per_sec": 0 00:13:42.205 }, 00:13:42.205 "claimed": true, 00:13:42.205 "claim_type": "exclusive_write", 00:13:42.205 "zoned": false, 00:13:42.205 "supported_io_types": { 00:13:42.205 "read": true, 00:13:42.205 "write": true, 00:13:42.205 "unmap": true, 00:13:42.205 "flush": true, 00:13:42.205 "reset": true, 00:13:42.205 "nvme_admin": false, 00:13:42.205 "nvme_io": false, 00:13:42.205 "nvme_io_md": false, 00:13:42.205 "write_zeroes": true, 00:13:42.205 "zcopy": true, 00:13:42.205 "get_zone_info": false, 00:13:42.205 "zone_management": false, 00:13:42.205 "zone_append": false, 00:13:42.205 "compare": false, 00:13:42.205 "compare_and_write": false, 00:13:42.205 "abort": true, 00:13:42.205 "seek_hole": false, 00:13:42.205 "seek_data": false, 00:13:42.205 "copy": true, 00:13:42.205 "nvme_iov_md": false 00:13:42.205 }, 00:13:42.205 "memory_domains": [ 00:13:42.205 { 00:13:42.205 "dma_device_id": "system", 00:13:42.205 "dma_device_type": 1 00:13:42.205 }, 00:13:42.205 { 00:13:42.205 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.205 "dma_device_type": 2 00:13:42.205 } 00:13:42.205 ], 00:13:42.205 "driver_specific": {} 00:13:42.205 } 00:13:42.205 ] 00:13:42.205 06:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.205 06:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:42.205 06:23:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:13:42.205 06:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:42.205 06:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:42.205 06:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:42.205 06:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:42.205 06:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:42.205 06:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.205 06:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.205 06:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.205 06:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.205 06:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:42.205 06:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.205 06:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.205 06:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.205 06:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.205 06:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:42.205 "name": "Existed_Raid", 00:13:42.205 "uuid": "9e79583a-e326-475c-b570-106042ec5334", 00:13:42.205 "strip_size_kb": 64, 00:13:42.205 
"state": "online", 00:13:42.205 "raid_level": "raid0", 00:13:42.205 "superblock": true, 00:13:42.205 "num_base_bdevs": 4, 00:13:42.205 "num_base_bdevs_discovered": 4, 00:13:42.205 "num_base_bdevs_operational": 4, 00:13:42.205 "base_bdevs_list": [ 00:13:42.205 { 00:13:42.205 "name": "NewBaseBdev", 00:13:42.205 "uuid": "e5754f52-51a3-44f7-af2d-4090e4ad6cbd", 00:13:42.205 "is_configured": true, 00:13:42.205 "data_offset": 2048, 00:13:42.205 "data_size": 63488 00:13:42.205 }, 00:13:42.205 { 00:13:42.205 "name": "BaseBdev2", 00:13:42.205 "uuid": "687ea3e6-b392-4329-ba09-0315102ecb85", 00:13:42.205 "is_configured": true, 00:13:42.205 "data_offset": 2048, 00:13:42.205 "data_size": 63488 00:13:42.205 }, 00:13:42.205 { 00:13:42.205 "name": "BaseBdev3", 00:13:42.205 "uuid": "c0d8dc1b-fc8e-4607-a313-c68db2822c09", 00:13:42.205 "is_configured": true, 00:13:42.205 "data_offset": 2048, 00:13:42.205 "data_size": 63488 00:13:42.205 }, 00:13:42.205 { 00:13:42.205 "name": "BaseBdev4", 00:13:42.205 "uuid": "0fb5cc58-8952-4127-8468-b8a270153e26", 00:13:42.205 "is_configured": true, 00:13:42.205 "data_offset": 2048, 00:13:42.205 "data_size": 63488 00:13:42.205 } 00:13:42.205 ] 00:13:42.205 }' 00:13:42.205 06:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:42.205 06:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.774 06:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:42.774 06:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:42.774 06:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:42.774 06:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:42.774 06:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:42.774 
06:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:42.774 06:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:42.774 06:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:42.774 06:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.774 06:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.774 [2024-11-26 06:23:26.685240] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:42.774 06:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.774 06:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:42.774 "name": "Existed_Raid", 00:13:42.774 "aliases": [ 00:13:42.774 "9e79583a-e326-475c-b570-106042ec5334" 00:13:42.774 ], 00:13:42.774 "product_name": "Raid Volume", 00:13:42.774 "block_size": 512, 00:13:42.774 "num_blocks": 253952, 00:13:42.774 "uuid": "9e79583a-e326-475c-b570-106042ec5334", 00:13:42.774 "assigned_rate_limits": { 00:13:42.774 "rw_ios_per_sec": 0, 00:13:42.774 "rw_mbytes_per_sec": 0, 00:13:42.774 "r_mbytes_per_sec": 0, 00:13:42.774 "w_mbytes_per_sec": 0 00:13:42.774 }, 00:13:42.774 "claimed": false, 00:13:42.774 "zoned": false, 00:13:42.774 "supported_io_types": { 00:13:42.774 "read": true, 00:13:42.774 "write": true, 00:13:42.774 "unmap": true, 00:13:42.774 "flush": true, 00:13:42.774 "reset": true, 00:13:42.774 "nvme_admin": false, 00:13:42.774 "nvme_io": false, 00:13:42.774 "nvme_io_md": false, 00:13:42.774 "write_zeroes": true, 00:13:42.774 "zcopy": false, 00:13:42.774 "get_zone_info": false, 00:13:42.774 "zone_management": false, 00:13:42.774 "zone_append": false, 00:13:42.774 "compare": false, 00:13:42.774 "compare_and_write": false, 00:13:42.774 "abort": 
false, 00:13:42.774 "seek_hole": false, 00:13:42.774 "seek_data": false, 00:13:42.774 "copy": false, 00:13:42.774 "nvme_iov_md": false 00:13:42.774 }, 00:13:42.774 "memory_domains": [ 00:13:42.774 { 00:13:42.774 "dma_device_id": "system", 00:13:42.774 "dma_device_type": 1 00:13:42.774 }, 00:13:42.774 { 00:13:42.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.774 "dma_device_type": 2 00:13:42.774 }, 00:13:42.774 { 00:13:42.774 "dma_device_id": "system", 00:13:42.774 "dma_device_type": 1 00:13:42.774 }, 00:13:42.774 { 00:13:42.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.774 "dma_device_type": 2 00:13:42.774 }, 00:13:42.774 { 00:13:42.774 "dma_device_id": "system", 00:13:42.774 "dma_device_type": 1 00:13:42.774 }, 00:13:42.774 { 00:13:42.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.774 "dma_device_type": 2 00:13:42.774 }, 00:13:42.774 { 00:13:42.774 "dma_device_id": "system", 00:13:42.774 "dma_device_type": 1 00:13:42.774 }, 00:13:42.774 { 00:13:42.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.774 "dma_device_type": 2 00:13:42.774 } 00:13:42.774 ], 00:13:42.774 "driver_specific": { 00:13:42.774 "raid": { 00:13:42.774 "uuid": "9e79583a-e326-475c-b570-106042ec5334", 00:13:42.774 "strip_size_kb": 64, 00:13:42.774 "state": "online", 00:13:42.774 "raid_level": "raid0", 00:13:42.774 "superblock": true, 00:13:42.774 "num_base_bdevs": 4, 00:13:42.774 "num_base_bdevs_discovered": 4, 00:13:42.774 "num_base_bdevs_operational": 4, 00:13:42.774 "base_bdevs_list": [ 00:13:42.774 { 00:13:42.774 "name": "NewBaseBdev", 00:13:42.774 "uuid": "e5754f52-51a3-44f7-af2d-4090e4ad6cbd", 00:13:42.774 "is_configured": true, 00:13:42.774 "data_offset": 2048, 00:13:42.774 "data_size": 63488 00:13:42.774 }, 00:13:42.774 { 00:13:42.774 "name": "BaseBdev2", 00:13:42.774 "uuid": "687ea3e6-b392-4329-ba09-0315102ecb85", 00:13:42.774 "is_configured": true, 00:13:42.774 "data_offset": 2048, 00:13:42.774 "data_size": 63488 00:13:42.774 }, 00:13:42.774 { 00:13:42.774 
"name": "BaseBdev3", 00:13:42.774 "uuid": "c0d8dc1b-fc8e-4607-a313-c68db2822c09", 00:13:42.774 "is_configured": true, 00:13:42.774 "data_offset": 2048, 00:13:42.774 "data_size": 63488 00:13:42.774 }, 00:13:42.774 { 00:13:42.774 "name": "BaseBdev4", 00:13:42.774 "uuid": "0fb5cc58-8952-4127-8468-b8a270153e26", 00:13:42.774 "is_configured": true, 00:13:42.774 "data_offset": 2048, 00:13:42.774 "data_size": 63488 00:13:42.774 } 00:13:42.774 ] 00:13:42.774 } 00:13:42.774 } 00:13:42.774 }' 00:13:42.774 06:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:42.774 06:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:42.774 BaseBdev2 00:13:42.774 BaseBdev3 00:13:42.774 BaseBdev4' 00:13:42.774 06:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:42.774 06:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:42.774 06:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:42.774 06:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:42.774 06:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:42.774 06:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.774 06:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.774 06:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.774 06:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:42.774 06:23:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:42.774 06:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:42.774 06:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:42.774 06:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:42.774 06:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.774 06:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.774 06:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.774 06:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:42.774 06:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:42.774 06:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:42.774 06:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:42.774 06:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:42.774 06:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.774 06:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.034 06:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.034 06:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:43.034 06:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:13:43.034 06:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:43.034 06:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:43.034 06:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:43.034 06:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.034 06:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.034 06:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.034 06:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:43.034 06:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:43.034 06:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:43.034 06:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.034 06:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.034 [2024-11-26 06:23:26.972348] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:43.034 [2024-11-26 06:23:26.972416] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:43.034 [2024-11-26 06:23:26.972581] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:43.034 [2024-11-26 06:23:26.972728] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:43.034 [2024-11-26 06:23:26.972792] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:13:43.034 06:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.034 06:23:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70521 00:13:43.034 06:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 70521 ']' 00:13:43.034 06:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 70521 00:13:43.034 06:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:43.034 06:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:43.034 06:23:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70521 00:13:43.034 06:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:43.034 06:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:43.034 killing process with pid 70521 00:13:43.034 06:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70521' 00:13:43.034 06:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 70521 00:13:43.034 [2024-11-26 06:23:27.012705] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:43.034 06:23:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 70521 00:13:43.601 [2024-11-26 06:23:27.476007] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:44.978 06:23:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:44.978 00:13:44.978 real 0m12.221s 00:13:44.978 user 0m18.884s 00:13:44.978 sys 0m2.344s 00:13:44.978 06:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:44.978 
************************************ 00:13:44.978 END TEST raid_state_function_test_sb 00:13:44.978 ************************************ 00:13:44.978 06:23:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.978 06:23:28 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:13:44.978 06:23:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:44.978 06:23:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:44.978 06:23:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:44.978 ************************************ 00:13:44.978 START TEST raid_superblock_test 00:13:44.978 ************************************ 00:13:44.978 06:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:13:44.978 06:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:13:44.978 06:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:13:44.978 06:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:44.978 06:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:44.978 06:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:44.978 06:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:44.978 06:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:44.978 06:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:44.978 06:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:44.978 06:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:44.978 06:23:28 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:13:44.978 06:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:44.978 06:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:44.978 06:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:13:44.978 06:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:13:44.978 06:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:13:44.978 06:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=71197 00:13:44.978 06:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:44.978 06:23:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 71197 00:13:44.978 06:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 71197 ']' 00:13:44.978 06:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:44.978 06:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:44.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:44.978 06:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:44.978 06:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:44.978 06:23:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.978 [2024-11-26 06:23:29.004185] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:13:44.978 [2024-11-26 06:23:29.004371] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71197 ] 00:13:45.237 [2024-11-26 06:23:29.188387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:45.237 [2024-11-26 06:23:29.347220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.497 [2024-11-26 06:23:29.603977] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:45.497 [2024-11-26 06:23:29.604045] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:45.757 06:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:45.757 06:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:13:45.757 06:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:45.757 06:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:45.757 06:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:45.757 06:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:45.757 06:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:45.757 06:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:45.757 06:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:45.757 06:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:45.757 06:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:13:45.757 
06:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.757 06:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.017 malloc1 00:13:46.017 06:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.017 06:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:46.017 06:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.017 06:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.017 [2024-11-26 06:23:29.941195] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:46.017 [2024-11-26 06:23:29.941301] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:46.017 [2024-11-26 06:23:29.941334] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:46.017 [2024-11-26 06:23:29.941345] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:46.017 [2024-11-26 06:23:29.944497] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:46.017 [2024-11-26 06:23:29.944552] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:46.017 pt1 00:13:46.017 06:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.017 06:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:46.017 06:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:46.017 06:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:46.017 06:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:13:46.017 06:23:29 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:46.017 06:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:46.017 06:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:46.017 06:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:46.017 06:23:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:13:46.017 06:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.017 06:23:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.017 malloc2 00:13:46.017 06:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.017 06:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:46.017 06:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.018 06:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.018 [2024-11-26 06:23:30.008817] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:46.018 [2024-11-26 06:23:30.008896] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:46.018 [2024-11-26 06:23:30.008924] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:46.018 [2024-11-26 06:23:30.008936] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:46.018 [2024-11-26 06:23:30.011752] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:46.018 [2024-11-26 06:23:30.011798] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:46.018 
pt2 00:13:46.018 06:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.018 06:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:46.018 06:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:46.018 06:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:13:46.018 06:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:13:46.018 06:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:46.018 06:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:46.018 06:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:46.018 06:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:46.018 06:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:13:46.018 06:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.018 06:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.018 malloc3 00:13:46.018 06:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.018 06:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:46.018 06:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.018 06:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.018 [2024-11-26 06:23:30.088864] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:46.018 [2024-11-26 06:23:30.088940] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:46.018 [2024-11-26 06:23:30.088967] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:46.018 [2024-11-26 06:23:30.088978] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:46.018 [2024-11-26 06:23:30.091654] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:46.018 [2024-11-26 06:23:30.091692] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:46.018 pt3 00:13:46.018 06:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.018 06:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:46.018 06:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:46.018 06:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:13:46.018 06:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:13:46.018 06:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:13:46.018 06:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:46.018 06:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:46.018 06:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:46.018 06:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:13:46.018 06:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.018 06:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.278 malloc4 00:13:46.278 06:23:30 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.278 06:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:46.278 06:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.278 06:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.278 [2024-11-26 06:23:30.156176] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:46.278 [2024-11-26 06:23:30.156260] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:46.278 [2024-11-26 06:23:30.156292] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:46.278 [2024-11-26 06:23:30.156307] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:46.278 [2024-11-26 06:23:30.159096] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:46.278 [2024-11-26 06:23:30.159130] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:46.278 pt4 00:13:46.278 06:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.278 06:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:46.278 06:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:46.278 06:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:13:46.278 06:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.278 06:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.278 [2024-11-26 06:23:30.172195] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:46.278 [2024-11-26 
06:23:30.174662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:46.278 [2024-11-26 06:23:30.174747] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:46.278 [2024-11-26 06:23:30.174837] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:46.278 [2024-11-26 06:23:30.175119] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:46.278 [2024-11-26 06:23:30.175145] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:46.278 [2024-11-26 06:23:30.175552] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:46.278 [2024-11-26 06:23:30.175802] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:46.278 [2024-11-26 06:23:30.175831] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:46.278 [2024-11-26 06:23:30.176195] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:46.278 06:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.278 06:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:13:46.278 06:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:46.278 06:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:46.278 06:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:46.278 06:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:46.278 06:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:46.278 06:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:13:46.278 06:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.278 06:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.278 06:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.278 06:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.278 06:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.278 06:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.278 06:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.278 06:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.278 06:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.278 "name": "raid_bdev1", 00:13:46.278 "uuid": "4c869631-0a14-4e13-8077-136f0613d28a", 00:13:46.278 "strip_size_kb": 64, 00:13:46.278 "state": "online", 00:13:46.278 "raid_level": "raid0", 00:13:46.278 "superblock": true, 00:13:46.278 "num_base_bdevs": 4, 00:13:46.278 "num_base_bdevs_discovered": 4, 00:13:46.278 "num_base_bdevs_operational": 4, 00:13:46.278 "base_bdevs_list": [ 00:13:46.278 { 00:13:46.278 "name": "pt1", 00:13:46.278 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:46.278 "is_configured": true, 00:13:46.278 "data_offset": 2048, 00:13:46.278 "data_size": 63488 00:13:46.278 }, 00:13:46.278 { 00:13:46.278 "name": "pt2", 00:13:46.278 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:46.278 "is_configured": true, 00:13:46.278 "data_offset": 2048, 00:13:46.278 "data_size": 63488 00:13:46.278 }, 00:13:46.278 { 00:13:46.278 "name": "pt3", 00:13:46.278 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:46.278 "is_configured": true, 00:13:46.278 "data_offset": 2048, 00:13:46.278 
"data_size": 63488 00:13:46.278 }, 00:13:46.278 { 00:13:46.278 "name": "pt4", 00:13:46.278 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:46.278 "is_configured": true, 00:13:46.278 "data_offset": 2048, 00:13:46.278 "data_size": 63488 00:13:46.278 } 00:13:46.278 ] 00:13:46.278 }' 00:13:46.278 06:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.278 06:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.845 06:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:46.845 06:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:46.845 06:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:46.845 06:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:46.845 06:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:46.845 06:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:46.845 06:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:46.845 06:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:46.845 06:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.845 06:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.845 [2024-11-26 06:23:30.707837] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:46.845 06:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.845 06:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:46.845 "name": "raid_bdev1", 00:13:46.845 "aliases": [ 00:13:46.845 "4c869631-0a14-4e13-8077-136f0613d28a" 
00:13:46.845 ], 00:13:46.845 "product_name": "Raid Volume", 00:13:46.845 "block_size": 512, 00:13:46.846 "num_blocks": 253952, 00:13:46.846 "uuid": "4c869631-0a14-4e13-8077-136f0613d28a", 00:13:46.846 "assigned_rate_limits": { 00:13:46.846 "rw_ios_per_sec": 0, 00:13:46.846 "rw_mbytes_per_sec": 0, 00:13:46.846 "r_mbytes_per_sec": 0, 00:13:46.846 "w_mbytes_per_sec": 0 00:13:46.846 }, 00:13:46.846 "claimed": false, 00:13:46.846 "zoned": false, 00:13:46.846 "supported_io_types": { 00:13:46.846 "read": true, 00:13:46.846 "write": true, 00:13:46.846 "unmap": true, 00:13:46.846 "flush": true, 00:13:46.846 "reset": true, 00:13:46.846 "nvme_admin": false, 00:13:46.846 "nvme_io": false, 00:13:46.846 "nvme_io_md": false, 00:13:46.846 "write_zeroes": true, 00:13:46.846 "zcopy": false, 00:13:46.846 "get_zone_info": false, 00:13:46.846 "zone_management": false, 00:13:46.846 "zone_append": false, 00:13:46.846 "compare": false, 00:13:46.846 "compare_and_write": false, 00:13:46.846 "abort": false, 00:13:46.846 "seek_hole": false, 00:13:46.846 "seek_data": false, 00:13:46.846 "copy": false, 00:13:46.846 "nvme_iov_md": false 00:13:46.846 }, 00:13:46.846 "memory_domains": [ 00:13:46.846 { 00:13:46.846 "dma_device_id": "system", 00:13:46.846 "dma_device_type": 1 00:13:46.846 }, 00:13:46.846 { 00:13:46.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:46.846 "dma_device_type": 2 00:13:46.846 }, 00:13:46.846 { 00:13:46.846 "dma_device_id": "system", 00:13:46.846 "dma_device_type": 1 00:13:46.846 }, 00:13:46.846 { 00:13:46.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:46.846 "dma_device_type": 2 00:13:46.846 }, 00:13:46.846 { 00:13:46.846 "dma_device_id": "system", 00:13:46.846 "dma_device_type": 1 00:13:46.846 }, 00:13:46.846 { 00:13:46.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:46.846 "dma_device_type": 2 00:13:46.846 }, 00:13:46.846 { 00:13:46.846 "dma_device_id": "system", 00:13:46.846 "dma_device_type": 1 00:13:46.846 }, 00:13:46.846 { 00:13:46.846 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:13:46.846 "dma_device_type": 2 00:13:46.846 } 00:13:46.846 ], 00:13:46.846 "driver_specific": { 00:13:46.846 "raid": { 00:13:46.846 "uuid": "4c869631-0a14-4e13-8077-136f0613d28a", 00:13:46.846 "strip_size_kb": 64, 00:13:46.846 "state": "online", 00:13:46.846 "raid_level": "raid0", 00:13:46.846 "superblock": true, 00:13:46.846 "num_base_bdevs": 4, 00:13:46.846 "num_base_bdevs_discovered": 4, 00:13:46.846 "num_base_bdevs_operational": 4, 00:13:46.846 "base_bdevs_list": [ 00:13:46.846 { 00:13:46.846 "name": "pt1", 00:13:46.846 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:46.846 "is_configured": true, 00:13:46.846 "data_offset": 2048, 00:13:46.846 "data_size": 63488 00:13:46.846 }, 00:13:46.846 { 00:13:46.846 "name": "pt2", 00:13:46.846 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:46.846 "is_configured": true, 00:13:46.846 "data_offset": 2048, 00:13:46.846 "data_size": 63488 00:13:46.846 }, 00:13:46.846 { 00:13:46.846 "name": "pt3", 00:13:46.846 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:46.846 "is_configured": true, 00:13:46.846 "data_offset": 2048, 00:13:46.846 "data_size": 63488 00:13:46.846 }, 00:13:46.846 { 00:13:46.846 "name": "pt4", 00:13:46.846 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:46.846 "is_configured": true, 00:13:46.846 "data_offset": 2048, 00:13:46.846 "data_size": 63488 00:13:46.846 } 00:13:46.846 ] 00:13:46.846 } 00:13:46.846 } 00:13:46.846 }' 00:13:46.846 06:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:46.846 06:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:46.846 pt2 00:13:46.846 pt3 00:13:46.846 pt4' 00:13:46.846 06:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:46.846 06:23:30 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:46.846 06:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:46.846 06:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:46.846 06:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.846 06:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:46.846 06:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.846 06:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.846 06:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:46.846 06:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:46.846 06:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:46.846 06:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:46.846 06:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:46.846 06:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.846 06:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.846 06:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.846 06:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:46.846 06:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:46.846 06:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:46.846 06:23:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:46.846 06:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:46.846 06:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.846 06:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.846 06:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.846 06:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:46.846 06:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:46.846 06:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:46.846 06:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:13:46.846 06:23:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:46.846 06:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.846 06:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.105 06:23:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.105 06:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:47.105 06:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:47.105 06:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:47.105 06:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.105 06:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | 
.uuid' 00:13:47.105 06:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.105 [2024-11-26 06:23:31.019334] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:47.105 06:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.105 06:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=4c869631-0a14-4e13-8077-136f0613d28a 00:13:47.105 06:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 4c869631-0a14-4e13-8077-136f0613d28a ']' 00:13:47.105 06:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:47.105 06:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.105 06:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.105 [2024-11-26 06:23:31.050840] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:47.105 [2024-11-26 06:23:31.050879] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:47.105 [2024-11-26 06:23:31.051016] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:47.105 [2024-11-26 06:23:31.051125] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:47.105 [2024-11-26 06:23:31.051148] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:47.105 06:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.105 06:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:47.105 06:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.105 06:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:47.105 06:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.105 06:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.105 06:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:13:47.105 06:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:47.105 06:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:47.105 06:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:13:47.105 06:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.105 06:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.105 06:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.105 06:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:47.105 06:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:13:47.105 06:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.105 06:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.106 06:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.106 06:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:47.106 06:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:13:47.106 06:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.106 06:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.106 06:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:13:47.106 06:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:47.106 06:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:13:47.106 06:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.106 06:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.106 06:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.106 06:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:13:47.106 06:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.106 06:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.106 06:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:47.106 06:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.106 06:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:13:47.106 06:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:47.106 06:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:13:47.106 06:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:47.106 06:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:47.106 06:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:47.106 06:23:31 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:47.106 06:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:47.106 06:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:47.106 06:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.106 06:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.106 [2024-11-26 06:23:31.190653] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:47.106 [2024-11-26 06:23:31.193122] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:47.106 [2024-11-26 06:23:31.193184] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:47.106 [2024-11-26 06:23:31.193221] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:13:47.106 [2024-11-26 06:23:31.193284] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:47.106 [2024-11-26 06:23:31.193344] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:47.106 [2024-11-26 06:23:31.193365] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:47.106 [2024-11-26 06:23:31.193385] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:13:47.106 [2024-11-26 06:23:31.193399] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:47.106 [2024-11-26 06:23:31.193413] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:13:47.106 request: 00:13:47.106 { 00:13:47.106 "name": "raid_bdev1", 00:13:47.106 "raid_level": "raid0", 00:13:47.106 "base_bdevs": [ 00:13:47.106 "malloc1", 00:13:47.106 "malloc2", 00:13:47.106 "malloc3", 00:13:47.106 "malloc4" 00:13:47.106 ], 00:13:47.106 "strip_size_kb": 64, 00:13:47.106 "superblock": false, 00:13:47.106 "method": "bdev_raid_create", 00:13:47.106 "req_id": 1 00:13:47.106 } 00:13:47.106 Got JSON-RPC error response 00:13:47.106 response: 00:13:47.106 { 00:13:47.106 "code": -17, 00:13:47.106 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:47.106 } 00:13:47.106 06:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:47.106 06:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:13:47.106 06:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:47.106 06:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:47.106 06:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:47.106 06:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:47.106 06:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.106 06:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.106 06:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.106 06:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.366 06:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:47.366 06:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:47.366 06:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:13:47.366 06:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.366 06:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.366 [2024-11-26 06:23:31.246482] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:47.366 [2024-11-26 06:23:31.246572] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:47.366 [2024-11-26 06:23:31.246594] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:47.366 [2024-11-26 06:23:31.246606] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:47.366 [2024-11-26 06:23:31.249485] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:47.366 [2024-11-26 06:23:31.249532] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:47.366 [2024-11-26 06:23:31.249632] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:47.366 [2024-11-26 06:23:31.249706] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:47.366 pt1 00:13:47.366 06:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.366 06:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:13:47.366 06:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:47.366 06:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:47.366 06:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:47.366 06:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:47.366 06:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:13:47.366 06:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.366 06:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.366 06:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.366 06:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.366 06:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.366 06:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.367 06:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.367 06:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.367 06:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.367 06:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.367 "name": "raid_bdev1", 00:13:47.367 "uuid": "4c869631-0a14-4e13-8077-136f0613d28a", 00:13:47.367 "strip_size_kb": 64, 00:13:47.367 "state": "configuring", 00:13:47.367 "raid_level": "raid0", 00:13:47.367 "superblock": true, 00:13:47.367 "num_base_bdevs": 4, 00:13:47.367 "num_base_bdevs_discovered": 1, 00:13:47.367 "num_base_bdevs_operational": 4, 00:13:47.367 "base_bdevs_list": [ 00:13:47.367 { 00:13:47.367 "name": "pt1", 00:13:47.367 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:47.367 "is_configured": true, 00:13:47.367 "data_offset": 2048, 00:13:47.367 "data_size": 63488 00:13:47.367 }, 00:13:47.367 { 00:13:47.367 "name": null, 00:13:47.367 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:47.367 "is_configured": false, 00:13:47.367 "data_offset": 2048, 00:13:47.367 "data_size": 63488 00:13:47.367 }, 00:13:47.367 { 00:13:47.367 "name": null, 00:13:47.367 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:13:47.367 "is_configured": false, 00:13:47.367 "data_offset": 2048, 00:13:47.367 "data_size": 63488 00:13:47.367 }, 00:13:47.367 { 00:13:47.367 "name": null, 00:13:47.367 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:47.367 "is_configured": false, 00:13:47.367 "data_offset": 2048, 00:13:47.367 "data_size": 63488 00:13:47.367 } 00:13:47.367 ] 00:13:47.367 }' 00:13:47.367 06:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.367 06:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.626 06:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:13:47.626 06:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:47.626 06:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.626 06:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.626 [2024-11-26 06:23:31.721770] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:47.626 [2024-11-26 06:23:31.721882] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:47.626 [2024-11-26 06:23:31.721910] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:47.626 [2024-11-26 06:23:31.721925] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:47.626 [2024-11-26 06:23:31.722580] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:47.626 [2024-11-26 06:23:31.722620] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:47.626 [2024-11-26 06:23:31.722741] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:47.626 [2024-11-26 06:23:31.722775] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:47.626 pt2 00:13:47.626 06:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.626 06:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:13:47.626 06:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.626 06:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.626 [2024-11-26 06:23:31.733775] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:47.626 06:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.626 06:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:13:47.626 06:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:47.626 06:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:47.626 06:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:47.626 06:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:47.626 06:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:47.626 06:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.626 06:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.626 06:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.626 06:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.626 06:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.626 06:23:31 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.626 06:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.626 06:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.626 06:23:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.884 06:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.884 "name": "raid_bdev1", 00:13:47.884 "uuid": "4c869631-0a14-4e13-8077-136f0613d28a", 00:13:47.884 "strip_size_kb": 64, 00:13:47.884 "state": "configuring", 00:13:47.884 "raid_level": "raid0", 00:13:47.884 "superblock": true, 00:13:47.884 "num_base_bdevs": 4, 00:13:47.884 "num_base_bdevs_discovered": 1, 00:13:47.884 "num_base_bdevs_operational": 4, 00:13:47.884 "base_bdevs_list": [ 00:13:47.884 { 00:13:47.884 "name": "pt1", 00:13:47.884 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:47.885 "is_configured": true, 00:13:47.885 "data_offset": 2048, 00:13:47.885 "data_size": 63488 00:13:47.885 }, 00:13:47.885 { 00:13:47.885 "name": null, 00:13:47.885 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:47.885 "is_configured": false, 00:13:47.885 "data_offset": 0, 00:13:47.885 "data_size": 63488 00:13:47.885 }, 00:13:47.885 { 00:13:47.885 "name": null, 00:13:47.885 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:47.885 "is_configured": false, 00:13:47.885 "data_offset": 2048, 00:13:47.885 "data_size": 63488 00:13:47.885 }, 00:13:47.885 { 00:13:47.885 "name": null, 00:13:47.885 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:47.885 "is_configured": false, 00:13:47.885 "data_offset": 2048, 00:13:47.885 "data_size": 63488 00:13:47.885 } 00:13:47.885 ] 00:13:47.885 }' 00:13:47.885 06:23:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.885 06:23:31 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:48.143 06:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:13:48.143 06:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:48.143 06:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:48.143 06:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.143 06:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.143 [2024-11-26 06:23:32.196955] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:48.143 [2024-11-26 06:23:32.197046] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:48.143 [2024-11-26 06:23:32.197083] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:48.143 [2024-11-26 06:23:32.197095] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:48.143 [2024-11-26 06:23:32.197755] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:48.143 [2024-11-26 06:23:32.197786] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:48.143 [2024-11-26 06:23:32.197901] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:48.143 [2024-11-26 06:23:32.197930] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:48.143 pt2 00:13:48.143 06:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.143 06:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:48.143 06:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:48.143 06:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:48.143 06:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.143 06:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.143 [2024-11-26 06:23:32.204901] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:48.143 [2024-11-26 06:23:32.204967] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:48.143 [2024-11-26 06:23:32.204999] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:48.143 [2024-11-26 06:23:32.205012] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:48.143 [2024-11-26 06:23:32.205531] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:48.143 [2024-11-26 06:23:32.205560] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:48.143 [2024-11-26 06:23:32.205643] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:48.143 [2024-11-26 06:23:32.205666] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:48.143 pt3 00:13:48.143 06:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.143 06:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:48.143 06:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:48.143 06:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:48.143 06:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.143 06:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.143 [2024-11-26 06:23:32.212846] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:48.143 [2024-11-26 06:23:32.212905] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:48.143 [2024-11-26 06:23:32.212931] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:13:48.143 [2024-11-26 06:23:32.212940] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:48.143 [2024-11-26 06:23:32.213453] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:48.143 [2024-11-26 06:23:32.213479] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:48.143 [2024-11-26 06:23:32.213564] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:13:48.143 [2024-11-26 06:23:32.213588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:48.143 [2024-11-26 06:23:32.213775] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:48.143 [2024-11-26 06:23:32.213785] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:48.143 [2024-11-26 06:23:32.214121] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:48.143 [2024-11-26 06:23:32.214328] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:48.144 [2024-11-26 06:23:32.214352] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:48.144 [2024-11-26 06:23:32.214516] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:48.144 pt4 00:13:48.144 06:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.144 06:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:48.144 06:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:13:48.144 06:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:13:48.144 06:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:48.144 06:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:48.144 06:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:48.144 06:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:48.144 06:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:48.144 06:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:48.144 06:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:48.144 06:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:48.144 06:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.144 06:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.144 06:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.144 06:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.144 06:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.144 06:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.144 06:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:48.144 "name": "raid_bdev1", 00:13:48.144 "uuid": "4c869631-0a14-4e13-8077-136f0613d28a", 00:13:48.144 "strip_size_kb": 64, 00:13:48.144 "state": "online", 00:13:48.144 "raid_level": "raid0", 00:13:48.144 
"superblock": true, 00:13:48.144 "num_base_bdevs": 4, 00:13:48.144 "num_base_bdevs_discovered": 4, 00:13:48.144 "num_base_bdevs_operational": 4, 00:13:48.144 "base_bdevs_list": [ 00:13:48.144 { 00:13:48.144 "name": "pt1", 00:13:48.144 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:48.144 "is_configured": true, 00:13:48.144 "data_offset": 2048, 00:13:48.144 "data_size": 63488 00:13:48.144 }, 00:13:48.144 { 00:13:48.144 "name": "pt2", 00:13:48.144 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:48.144 "is_configured": true, 00:13:48.144 "data_offset": 2048, 00:13:48.144 "data_size": 63488 00:13:48.144 }, 00:13:48.144 { 00:13:48.144 "name": "pt3", 00:13:48.144 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:48.144 "is_configured": true, 00:13:48.144 "data_offset": 2048, 00:13:48.144 "data_size": 63488 00:13:48.144 }, 00:13:48.144 { 00:13:48.144 "name": "pt4", 00:13:48.144 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:48.144 "is_configured": true, 00:13:48.144 "data_offset": 2048, 00:13:48.144 "data_size": 63488 00:13:48.144 } 00:13:48.144 ] 00:13:48.144 }' 00:13:48.144 06:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:48.144 06:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.757 06:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:48.757 06:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:48.757 06:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:48.757 06:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:48.757 06:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:48.757 06:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:48.757 06:23:32 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:48.758 06:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:48.758 06:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.758 06:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.758 [2024-11-26 06:23:32.688729] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:48.758 06:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.758 06:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:48.758 "name": "raid_bdev1", 00:13:48.758 "aliases": [ 00:13:48.758 "4c869631-0a14-4e13-8077-136f0613d28a" 00:13:48.758 ], 00:13:48.758 "product_name": "Raid Volume", 00:13:48.758 "block_size": 512, 00:13:48.758 "num_blocks": 253952, 00:13:48.758 "uuid": "4c869631-0a14-4e13-8077-136f0613d28a", 00:13:48.758 "assigned_rate_limits": { 00:13:48.758 "rw_ios_per_sec": 0, 00:13:48.758 "rw_mbytes_per_sec": 0, 00:13:48.758 "r_mbytes_per_sec": 0, 00:13:48.758 "w_mbytes_per_sec": 0 00:13:48.758 }, 00:13:48.758 "claimed": false, 00:13:48.758 "zoned": false, 00:13:48.758 "supported_io_types": { 00:13:48.758 "read": true, 00:13:48.758 "write": true, 00:13:48.758 "unmap": true, 00:13:48.758 "flush": true, 00:13:48.758 "reset": true, 00:13:48.758 "nvme_admin": false, 00:13:48.758 "nvme_io": false, 00:13:48.758 "nvme_io_md": false, 00:13:48.758 "write_zeroes": true, 00:13:48.758 "zcopy": false, 00:13:48.758 "get_zone_info": false, 00:13:48.758 "zone_management": false, 00:13:48.758 "zone_append": false, 00:13:48.758 "compare": false, 00:13:48.758 "compare_and_write": false, 00:13:48.758 "abort": false, 00:13:48.758 "seek_hole": false, 00:13:48.758 "seek_data": false, 00:13:48.758 "copy": false, 00:13:48.758 "nvme_iov_md": false 00:13:48.758 }, 00:13:48.758 
"memory_domains": [ 00:13:48.758 { 00:13:48.758 "dma_device_id": "system", 00:13:48.758 "dma_device_type": 1 00:13:48.758 }, 00:13:48.758 { 00:13:48.758 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:48.758 "dma_device_type": 2 00:13:48.758 }, 00:13:48.758 { 00:13:48.758 "dma_device_id": "system", 00:13:48.758 "dma_device_type": 1 00:13:48.758 }, 00:13:48.758 { 00:13:48.758 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:48.758 "dma_device_type": 2 00:13:48.758 }, 00:13:48.758 { 00:13:48.758 "dma_device_id": "system", 00:13:48.758 "dma_device_type": 1 00:13:48.758 }, 00:13:48.758 { 00:13:48.758 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:48.758 "dma_device_type": 2 00:13:48.758 }, 00:13:48.758 { 00:13:48.758 "dma_device_id": "system", 00:13:48.758 "dma_device_type": 1 00:13:48.758 }, 00:13:48.758 { 00:13:48.758 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:48.758 "dma_device_type": 2 00:13:48.758 } 00:13:48.758 ], 00:13:48.758 "driver_specific": { 00:13:48.758 "raid": { 00:13:48.758 "uuid": "4c869631-0a14-4e13-8077-136f0613d28a", 00:13:48.758 "strip_size_kb": 64, 00:13:48.758 "state": "online", 00:13:48.758 "raid_level": "raid0", 00:13:48.758 "superblock": true, 00:13:48.758 "num_base_bdevs": 4, 00:13:48.758 "num_base_bdevs_discovered": 4, 00:13:48.758 "num_base_bdevs_operational": 4, 00:13:48.758 "base_bdevs_list": [ 00:13:48.758 { 00:13:48.758 "name": "pt1", 00:13:48.758 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:48.758 "is_configured": true, 00:13:48.758 "data_offset": 2048, 00:13:48.758 "data_size": 63488 00:13:48.758 }, 00:13:48.758 { 00:13:48.758 "name": "pt2", 00:13:48.758 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:48.758 "is_configured": true, 00:13:48.758 "data_offset": 2048, 00:13:48.758 "data_size": 63488 00:13:48.758 }, 00:13:48.758 { 00:13:48.758 "name": "pt3", 00:13:48.758 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:48.758 "is_configured": true, 00:13:48.758 "data_offset": 2048, 00:13:48.758 "data_size": 63488 
00:13:48.758 }, 00:13:48.758 { 00:13:48.758 "name": "pt4", 00:13:48.758 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:48.758 "is_configured": true, 00:13:48.758 "data_offset": 2048, 00:13:48.758 "data_size": 63488 00:13:48.758 } 00:13:48.758 ] 00:13:48.758 } 00:13:48.758 } 00:13:48.758 }' 00:13:48.758 06:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:48.758 06:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:48.758 pt2 00:13:48.758 pt3 00:13:48.758 pt4' 00:13:48.758 06:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:48.758 06:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:48.758 06:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:48.758 06:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:48.758 06:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:48.758 06:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.758 06:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.758 06:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.758 06:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:48.758 06:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:48.758 06:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:48.758 06:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:48.758 06:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:48.758 06:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.758 06:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.016 06:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.016 06:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:49.016 06:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:49.016 06:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:49.016 06:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:49.016 06:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:49.016 06:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.016 06:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.016 06:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.016 06:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:49.016 06:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:49.016 06:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:49.016 06:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:13:49.016 06:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.016 06:23:32 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:49.016 06:23:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:49.016 06:23:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.016 06:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:49.016 06:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:49.016 06:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:49.016 06:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:49.016 06:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.016 06:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.016 [2024-11-26 06:23:33.036584] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:49.016 06:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.016 06:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 4c869631-0a14-4e13-8077-136f0613d28a '!=' 4c869631-0a14-4e13-8077-136f0613d28a ']' 00:13:49.016 06:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:13:49.016 06:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:49.016 06:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:49.016 06:23:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 71197 00:13:49.016 06:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 71197 ']' 00:13:49.016 06:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 71197 00:13:49.016 06:23:33 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:13:49.016 06:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:49.016 06:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71197 00:13:49.016 06:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:49.016 06:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:49.016 06:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71197' 00:13:49.016 killing process with pid 71197 00:13:49.016 06:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 71197 00:13:49.016 [2024-11-26 06:23:33.125866] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:49.016 [2024-11-26 06:23:33.125994] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:49.016 06:23:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 71197 00:13:49.016 [2024-11-26 06:23:33.126108] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:49.016 [2024-11-26 06:23:33.126122] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:49.582 [2024-11-26 06:23:33.612642] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:50.960 06:23:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:50.960 00:13:50.960 real 0m6.050s 00:13:50.960 user 0m8.401s 00:13:50.960 sys 0m1.134s 00:13:50.960 ************************************ 00:13:50.960 END TEST raid_superblock_test 00:13:50.960 ************************************ 00:13:50.960 06:23:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:50.960 06:23:34 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.960 06:23:34 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:13:50.961 06:23:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:50.961 06:23:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:50.961 06:23:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:50.961 ************************************ 00:13:50.961 START TEST raid_read_error_test 00:13:50.961 ************************************ 00:13:50.961 06:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:13:50.961 06:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:13:50.961 06:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:13:50.961 06:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:13:50.961 06:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:50.961 06:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:50.961 06:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:50.961 06:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:50.961 06:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:50.961 06:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:50.961 06:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:50.961 06:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:50.961 06:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:50.961 06:23:35 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:50.961 06:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:50.961 06:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:13:50.961 06:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:50.961 06:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:50.961 06:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:50.961 06:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:50.961 06:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:50.961 06:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:50.961 06:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:50.961 06:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:50.961 06:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:50.961 06:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:13:50.961 06:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:50.961 06:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:50.961 06:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:50.961 06:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.TzxZTABtnG 00:13:50.961 06:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71467 00:13:50.961 06:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w 
randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:50.961 06:23:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71467 00:13:50.961 06:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 71467 ']' 00:13:50.961 06:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:50.961 06:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:50.961 06:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:50.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:50.961 06:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:50.961 06:23:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.220 [2024-11-26 06:23:35.133821] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:13:51.220 [2024-11-26 06:23:35.134102] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71467 ] 00:13:51.220 [2024-11-26 06:23:35.301956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:51.479 [2024-11-26 06:23:35.455718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.739 [2024-11-26 06:23:35.730785] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:51.739 [2024-11-26 06:23:35.730828] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:51.998 06:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:51.998 06:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:51.998 06:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:51.998 06:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:51.998 06:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.998 06:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.998 BaseBdev1_malloc 00:13:51.998 06:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.998 06:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:51.998 06:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.998 06:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.998 true 00:13:51.998 06:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:51.998 06:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:51.998 06:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.998 06:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.998 [2024-11-26 06:23:36.080618] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:51.998 [2024-11-26 06:23:36.080748] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:51.998 [2024-11-26 06:23:36.080798] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:51.998 [2024-11-26 06:23:36.080838] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:51.998 [2024-11-26 06:23:36.083738] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:51.998 [2024-11-26 06:23:36.083819] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:51.998 BaseBdev1 00:13:51.998 06:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.998 06:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:51.998 06:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:51.998 06:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.998 06:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.257 BaseBdev2_malloc 00:13:52.257 06:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.258 06:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:52.258 06:23:36 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.258 06:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.258 true 00:13:52.258 06:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.258 06:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:52.258 06:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.258 06:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.258 [2024-11-26 06:23:36.153890] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:52.258 [2024-11-26 06:23:36.154013] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:52.258 [2024-11-26 06:23:36.154053] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:52.258 [2024-11-26 06:23:36.154106] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:52.258 [2024-11-26 06:23:36.156802] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:52.258 [2024-11-26 06:23:36.156889] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:52.258 BaseBdev2 00:13:52.258 06:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.258 06:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:52.258 06:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:52.258 06:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.258 06:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.258 BaseBdev3_malloc 00:13:52.258 06:23:36 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.258 06:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:52.258 06:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.258 06:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.258 true 00:13:52.258 06:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.258 06:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:52.258 06:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.258 06:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.258 [2024-11-26 06:23:36.236272] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:52.258 [2024-11-26 06:23:36.236382] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:52.258 [2024-11-26 06:23:36.236421] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:52.258 [2024-11-26 06:23:36.236477] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:52.258 [2024-11-26 06:23:36.239062] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:52.258 [2024-11-26 06:23:36.239152] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:52.258 BaseBdev3 00:13:52.258 06:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.258 06:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:52.258 06:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:13:52.258 06:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.258 06:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.258 BaseBdev4_malloc 00:13:52.258 06:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.258 06:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:13:52.258 06:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.258 06:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.258 true 00:13:52.258 06:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.258 06:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:52.258 06:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.258 06:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.258 [2024-11-26 06:23:36.309720] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:52.258 [2024-11-26 06:23:36.309822] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:52.258 [2024-11-26 06:23:36.309860] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:52.258 [2024-11-26 06:23:36.309910] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:52.258 [2024-11-26 06:23:36.312769] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:52.258 [2024-11-26 06:23:36.312857] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:52.258 BaseBdev4 00:13:52.258 06:23:36 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.258 06:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:13:52.258 06:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.258 06:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.258 [2024-11-26 06:23:36.321823] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:52.258 [2024-11-26 06:23:36.324082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:52.258 [2024-11-26 06:23:36.324173] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:52.258 [2024-11-26 06:23:36.324249] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:52.258 [2024-11-26 06:23:36.324540] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:13:52.258 [2024-11-26 06:23:36.324565] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:52.258 [2024-11-26 06:23:36.324849] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:13:52.258 [2024-11-26 06:23:36.325028] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:13:52.258 [2024-11-26 06:23:36.325040] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:13:52.258 [2024-11-26 06:23:36.325247] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:52.258 06:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.258 06:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:13:52.258 06:23:36 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:52.258 06:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:52.258 06:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:52.258 06:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:52.258 06:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:52.258 06:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.258 06:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.258 06:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.258 06:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.258 06:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.258 06:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.258 06:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.258 06:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.258 06:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.258 06:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.258 "name": "raid_bdev1", 00:13:52.258 "uuid": "348d9a6f-20e5-4cfb-a128-cbc0c1c6ca40", 00:13:52.258 "strip_size_kb": 64, 00:13:52.258 "state": "online", 00:13:52.258 "raid_level": "raid0", 00:13:52.258 "superblock": true, 00:13:52.258 "num_base_bdevs": 4, 00:13:52.258 "num_base_bdevs_discovered": 4, 00:13:52.258 "num_base_bdevs_operational": 4, 00:13:52.258 "base_bdevs_list": [ 00:13:52.258 
{ 00:13:52.258 "name": "BaseBdev1", 00:13:52.258 "uuid": "54490544-98ee-5ef5-a60f-9eca6e111f0e", 00:13:52.258 "is_configured": true, 00:13:52.258 "data_offset": 2048, 00:13:52.258 "data_size": 63488 00:13:52.258 }, 00:13:52.258 { 00:13:52.258 "name": "BaseBdev2", 00:13:52.258 "uuid": "f8277fe2-83ea-5bbb-8704-dd548c275ae5", 00:13:52.258 "is_configured": true, 00:13:52.258 "data_offset": 2048, 00:13:52.258 "data_size": 63488 00:13:52.258 }, 00:13:52.258 { 00:13:52.258 "name": "BaseBdev3", 00:13:52.258 "uuid": "24931cae-6aab-5c2f-8f1f-e83f5e5194e2", 00:13:52.258 "is_configured": true, 00:13:52.258 "data_offset": 2048, 00:13:52.258 "data_size": 63488 00:13:52.258 }, 00:13:52.258 { 00:13:52.258 "name": "BaseBdev4", 00:13:52.258 "uuid": "c759d138-b0e0-56a4-acb8-31361da14f1f", 00:13:52.258 "is_configured": true, 00:13:52.258 "data_offset": 2048, 00:13:52.258 "data_size": 63488 00:13:52.258 } 00:13:52.258 ] 00:13:52.258 }' 00:13:52.258 06:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.258 06:23:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.826 06:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:52.826 06:23:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:52.826 [2024-11-26 06:23:36.874596] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:13:53.762 06:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:53.762 06:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.762 06:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.762 06:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.762 06:23:37 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:53.762 06:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:13:53.762 06:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:13:53.762 06:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:13:53.762 06:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:53.762 06:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:53.762 06:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:53.762 06:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:53.762 06:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:53.762 06:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:53.762 06:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:53.762 06:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:53.762 06:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:53.762 06:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.762 06:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:53.762 06:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.762 06:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.762 06:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.762 06:23:37 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:53.762 "name": "raid_bdev1", 00:13:53.762 "uuid": "348d9a6f-20e5-4cfb-a128-cbc0c1c6ca40", 00:13:53.762 "strip_size_kb": 64, 00:13:53.762 "state": "online", 00:13:53.762 "raid_level": "raid0", 00:13:53.762 "superblock": true, 00:13:53.762 "num_base_bdevs": 4, 00:13:53.762 "num_base_bdevs_discovered": 4, 00:13:53.763 "num_base_bdevs_operational": 4, 00:13:53.763 "base_bdevs_list": [ 00:13:53.763 { 00:13:53.763 "name": "BaseBdev1", 00:13:53.763 "uuid": "54490544-98ee-5ef5-a60f-9eca6e111f0e", 00:13:53.763 "is_configured": true, 00:13:53.763 "data_offset": 2048, 00:13:53.763 "data_size": 63488 00:13:53.763 }, 00:13:53.763 { 00:13:53.763 "name": "BaseBdev2", 00:13:53.763 "uuid": "f8277fe2-83ea-5bbb-8704-dd548c275ae5", 00:13:53.763 "is_configured": true, 00:13:53.763 "data_offset": 2048, 00:13:53.763 "data_size": 63488 00:13:53.763 }, 00:13:53.763 { 00:13:53.763 "name": "BaseBdev3", 00:13:53.763 "uuid": "24931cae-6aab-5c2f-8f1f-e83f5e5194e2", 00:13:53.763 "is_configured": true, 00:13:53.763 "data_offset": 2048, 00:13:53.763 "data_size": 63488 00:13:53.763 }, 00:13:53.763 { 00:13:53.763 "name": "BaseBdev4", 00:13:53.763 "uuid": "c759d138-b0e0-56a4-acb8-31361da14f1f", 00:13:53.763 "is_configured": true, 00:13:53.763 "data_offset": 2048, 00:13:53.763 "data_size": 63488 00:13:53.763 } 00:13:53.763 ] 00:13:53.763 }' 00:13:53.763 06:23:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:53.763 06:23:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.334 06:23:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:54.334 06:23:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.334 06:23:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.334 [2024-11-26 06:23:38.300796] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:54.334 [2024-11-26 06:23:38.300918] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:54.334 [2024-11-26 06:23:38.304293] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:54.334 [2024-11-26 06:23:38.304417] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:54.334 { 00:13:54.334 "results": [ 00:13:54.334 { 00:13:54.334 "job": "raid_bdev1", 00:13:54.334 "core_mask": "0x1", 00:13:54.334 "workload": "randrw", 00:13:54.334 "percentage": 50, 00:13:54.334 "status": "finished", 00:13:54.334 "queue_depth": 1, 00:13:54.334 "io_size": 131072, 00:13:54.334 "runtime": 1.426463, 00:13:54.334 "iops": 12359.241003797504, 00:13:54.334 "mibps": 1544.905125474688, 00:13:54.334 "io_failed": 1, 00:13:54.334 "io_timeout": 0, 00:13:54.334 "avg_latency_us": 113.95206388905608, 00:13:54.334 "min_latency_us": 28.39475982532751, 00:13:54.334 "max_latency_us": 1509.6174672489083 00:13:54.334 } 00:13:54.334 ], 00:13:54.334 "core_count": 1 00:13:54.334 } 00:13:54.334 [2024-11-26 06:23:38.304554] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:54.334 [2024-11-26 06:23:38.304576] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:13:54.334 06:23:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.334 06:23:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71467 00:13:54.334 06:23:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 71467 ']' 00:13:54.334 06:23:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 71467 00:13:54.334 06:23:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:13:54.334 06:23:38 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:54.334 06:23:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71467 00:13:54.334 06:23:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:54.334 killing process with pid 71467 00:13:54.334 06:23:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:54.334 06:23:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71467' 00:13:54.334 06:23:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 71467 00:13:54.334 [2024-11-26 06:23:38.336129] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:54.334 06:23:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 71467 00:13:54.593 [2024-11-26 06:23:38.716334] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:55.970 06:23:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:55.970 06:23:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.TzxZTABtnG 00:13:55.970 06:23:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:55.970 06:23:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:13:55.970 06:23:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:13:55.970 ************************************ 00:13:55.970 END TEST raid_read_error_test 00:13:55.970 ************************************ 00:13:55.970 06:23:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:55.970 06:23:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:55.970 06:23:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:13:55.970 00:13:55.970 real 0m5.076s 
00:13:55.970 user 0m5.850s 00:13:55.970 sys 0m0.756s 00:13:55.970 06:23:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:55.970 06:23:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.229 06:23:40 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:13:56.229 06:23:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:56.229 06:23:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:56.229 06:23:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:56.229 ************************************ 00:13:56.229 START TEST raid_write_error_test 00:13:56.229 ************************************ 00:13:56.229 06:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:13:56.229 06:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:13:56.229 06:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:13:56.229 06:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:13:56.229 06:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:56.229 06:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:56.229 06:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:56.229 06:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:56.229 06:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:56.229 06:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:56.229 06:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:56.229 06:23:40 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:56.229 06:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:56.229 06:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:56.229 06:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:56.229 06:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:13:56.229 06:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:56.229 06:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:56.230 06:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:56.230 06:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:56.230 06:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:56.230 06:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:56.230 06:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:56.230 06:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:56.230 06:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:56.230 06:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:13:56.230 06:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:56.230 06:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:56.230 06:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:56.230 06:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.jXNJXV3s8X 00:13:56.230 06:23:40 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71615 00:13:56.230 06:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:56.230 06:23:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71615 00:13:56.230 06:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 71615 ']' 00:13:56.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:56.230 06:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:56.230 06:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:56.230 06:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:56.230 06:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:56.230 06:23:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.230 [2024-11-26 06:23:40.291742] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:13:56.230 [2024-11-26 06:23:40.291892] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71615 ] 00:13:56.487 [2024-11-26 06:23:40.465452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:56.746 [2024-11-26 06:23:40.622188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:57.005 [2024-11-26 06:23:40.896404] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:57.005 [2024-11-26 06:23:40.896491] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:57.264 06:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:57.264 06:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:57.264 06:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:57.264 06:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:57.264 06:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.264 06:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.264 BaseBdev1_malloc 00:13:57.264 06:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.264 06:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:57.264 06:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.264 06:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.264 true 00:13:57.264 06:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:13:57.264 06:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:57.265 06:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.265 06:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.265 [2024-11-26 06:23:41.297527] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:57.265 [2024-11-26 06:23:41.297755] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:57.265 [2024-11-26 06:23:41.297859] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:57.265 [2024-11-26 06:23:41.297923] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:57.265 [2024-11-26 06:23:41.301003] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:57.265 [2024-11-26 06:23:41.301155] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:57.265 BaseBdev1 00:13:57.265 06:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.265 06:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:57.265 06:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:57.265 06:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.265 06:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.265 BaseBdev2_malloc 00:13:57.265 06:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.265 06:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:57.265 06:23:41 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.265 06:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.265 true 00:13:57.265 06:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.265 06:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:57.265 06:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.265 06:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.265 [2024-11-26 06:23:41.380113] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:57.265 [2024-11-26 06:23:41.380291] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:57.265 [2024-11-26 06:23:41.380373] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:57.265 [2024-11-26 06:23:41.380425] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:57.265 [2024-11-26 06:23:41.383600] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:57.265 [2024-11-26 06:23:41.383714] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:57.265 BaseBdev2 00:13:57.265 06:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.265 06:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:57.265 06:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:57.265 06:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.265 06:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:13:57.525 BaseBdev3_malloc 00:13:57.525 06:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.525 06:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:57.525 06:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.525 06:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.525 true 00:13:57.525 06:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.525 06:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:57.525 06:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.525 06:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.525 [2024-11-26 06:23:41.470335] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:57.525 [2024-11-26 06:23:41.470543] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:57.525 [2024-11-26 06:23:41.470636] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:57.525 [2024-11-26 06:23:41.470696] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:57.525 [2024-11-26 06:23:41.473940] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:57.525 [2024-11-26 06:23:41.474101] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:57.525 BaseBdev3 00:13:57.525 06:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.526 06:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:57.526 06:23:41 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:57.526 06:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.526 06:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.526 BaseBdev4_malloc 00:13:57.526 06:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.526 06:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:13:57.526 06:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.526 06:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.526 true 00:13:57.526 06:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.526 06:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:57.526 06:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.526 06:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.526 [2024-11-26 06:23:41.551567] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:57.526 [2024-11-26 06:23:41.551822] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:57.526 [2024-11-26 06:23:41.551869] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:57.526 [2024-11-26 06:23:41.551887] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:57.526 [2024-11-26 06:23:41.555000] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:57.526 [2024-11-26 06:23:41.555116] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:57.526 BaseBdev4 
00:13:57.526 06:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.526 06:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:13:57.526 06:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.526 06:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.526 [2024-11-26 06:23:41.563672] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:57.526 [2024-11-26 06:23:41.566296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:57.526 [2024-11-26 06:23:41.566510] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:57.526 [2024-11-26 06:23:41.566669] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:57.526 [2024-11-26 06:23:41.567056] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:13:57.526 [2024-11-26 06:23:41.567136] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:57.526 [2024-11-26 06:23:41.567591] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:13:57.526 [2024-11-26 06:23:41.567883] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:13:57.526 [2024-11-26 06:23:41.567942] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:13:57.526 [2024-11-26 06:23:41.568388] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:57.526 06:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.526 06:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:13:57.526 06:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:57.526 06:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:57.526 06:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:57.526 06:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:57.526 06:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:57.526 06:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.526 06:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.526 06:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.526 06:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.526 06:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:57.526 06:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.526 06:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.526 06:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.526 06:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.526 06:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.526 "name": "raid_bdev1", 00:13:57.526 "uuid": "4fd83620-dcaa-4089-91a9-630c9bebb188", 00:13:57.526 "strip_size_kb": 64, 00:13:57.526 "state": "online", 00:13:57.526 "raid_level": "raid0", 00:13:57.526 "superblock": true, 00:13:57.526 "num_base_bdevs": 4, 00:13:57.526 "num_base_bdevs_discovered": 4, 00:13:57.526 
"num_base_bdevs_operational": 4, 00:13:57.526 "base_bdevs_list": [ 00:13:57.526 { 00:13:57.526 "name": "BaseBdev1", 00:13:57.526 "uuid": "b15e3486-4090-59b5-879d-693423aedbb8", 00:13:57.526 "is_configured": true, 00:13:57.526 "data_offset": 2048, 00:13:57.526 "data_size": 63488 00:13:57.526 }, 00:13:57.526 { 00:13:57.526 "name": "BaseBdev2", 00:13:57.526 "uuid": "ab58bc2f-d744-54e4-9015-a95a6713a069", 00:13:57.526 "is_configured": true, 00:13:57.526 "data_offset": 2048, 00:13:57.526 "data_size": 63488 00:13:57.526 }, 00:13:57.526 { 00:13:57.526 "name": "BaseBdev3", 00:13:57.526 "uuid": "a5ccadcc-5e9d-5dc1-8ba9-9438d55866f8", 00:13:57.526 "is_configured": true, 00:13:57.526 "data_offset": 2048, 00:13:57.526 "data_size": 63488 00:13:57.526 }, 00:13:57.526 { 00:13:57.526 "name": "BaseBdev4", 00:13:57.526 "uuid": "53061a77-0da5-5a76-8e3a-1927cc58b22a", 00:13:57.526 "is_configured": true, 00:13:57.526 "data_offset": 2048, 00:13:57.526 "data_size": 63488 00:13:57.526 } 00:13:57.526 ] 00:13:57.526 }' 00:13:57.526 06:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.526 06:23:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.096 06:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:58.096 06:23:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:58.096 [2024-11-26 06:23:42.109393] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:13:59.035 06:23:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:59.035 06:23:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.035 06:23:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.035 06:23:43 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.035 06:23:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:59.035 06:23:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:13:59.035 06:23:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:13:59.035 06:23:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:13:59.035 06:23:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:59.035 06:23:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:59.035 06:23:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:59.035 06:23:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:59.035 06:23:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:59.035 06:23:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:59.035 06:23:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:59.035 06:23:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:59.035 06:23:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:59.035 06:23:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.035 06:23:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.035 06:23:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.035 06:23:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.035 06:23:43 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.035 06:23:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:59.035 "name": "raid_bdev1", 00:13:59.035 "uuid": "4fd83620-dcaa-4089-91a9-630c9bebb188", 00:13:59.035 "strip_size_kb": 64, 00:13:59.035 "state": "online", 00:13:59.035 "raid_level": "raid0", 00:13:59.035 "superblock": true, 00:13:59.035 "num_base_bdevs": 4, 00:13:59.035 "num_base_bdevs_discovered": 4, 00:13:59.035 "num_base_bdevs_operational": 4, 00:13:59.035 "base_bdevs_list": [ 00:13:59.035 { 00:13:59.035 "name": "BaseBdev1", 00:13:59.035 "uuid": "b15e3486-4090-59b5-879d-693423aedbb8", 00:13:59.035 "is_configured": true, 00:13:59.035 "data_offset": 2048, 00:13:59.035 "data_size": 63488 00:13:59.035 }, 00:13:59.035 { 00:13:59.035 "name": "BaseBdev2", 00:13:59.035 "uuid": "ab58bc2f-d744-54e4-9015-a95a6713a069", 00:13:59.035 "is_configured": true, 00:13:59.035 "data_offset": 2048, 00:13:59.035 "data_size": 63488 00:13:59.035 }, 00:13:59.035 { 00:13:59.035 "name": "BaseBdev3", 00:13:59.035 "uuid": "a5ccadcc-5e9d-5dc1-8ba9-9438d55866f8", 00:13:59.035 "is_configured": true, 00:13:59.035 "data_offset": 2048, 00:13:59.035 "data_size": 63488 00:13:59.035 }, 00:13:59.035 { 00:13:59.035 "name": "BaseBdev4", 00:13:59.035 "uuid": "53061a77-0da5-5a76-8e3a-1927cc58b22a", 00:13:59.035 "is_configured": true, 00:13:59.035 "data_offset": 2048, 00:13:59.035 "data_size": 63488 00:13:59.035 } 00:13:59.035 ] 00:13:59.035 }' 00:13:59.035 06:23:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:59.035 06:23:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.603 06:23:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:59.603 06:23:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.603 06:23:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:13:59.603 [2024-11-26 06:23:43.452205] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:59.603 [2024-11-26 06:23:43.452333] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:59.603 [2024-11-26 06:23:43.455611] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:59.603 [2024-11-26 06:23:43.455771] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:59.603 [2024-11-26 06:23:43.455887] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:59.603 [2024-11-26 06:23:43.456014] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:13:59.603 { 00:13:59.603 "results": [ 00:13:59.603 { 00:13:59.603 "job": "raid_bdev1", 00:13:59.603 "core_mask": "0x1", 00:13:59.603 "workload": "randrw", 00:13:59.603 "percentage": 50, 00:13:59.603 "status": "finished", 00:13:59.603 "queue_depth": 1, 00:13:59.603 "io_size": 131072, 00:13:59.603 "runtime": 1.342649, 00:13:59.603 "iops": 11819.17239725349, 00:13:59.603 "mibps": 1477.3965496566861, 00:13:59.603 "io_failed": 1, 00:13:59.603 "io_timeout": 0, 00:13:59.603 "avg_latency_us": 119.0321070488109, 00:13:59.603 "min_latency_us": 28.05938864628821, 00:13:59.603 "max_latency_us": 1724.2550218340612 00:13:59.603 } 00:13:59.603 ], 00:13:59.603 "core_count": 1 00:13:59.603 } 00:13:59.603 06:23:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.603 06:23:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71615 00:13:59.603 06:23:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 71615 ']' 00:13:59.603 06:23:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 71615 00:13:59.603 06:23:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 
00:13:59.603 06:23:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:59.603 06:23:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71615 00:13:59.603 killing process with pid 71615 00:13:59.603 06:23:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:59.603 06:23:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:59.603 06:23:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71615' 00:13:59.603 06:23:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 71615 00:13:59.603 [2024-11-26 06:23:43.501448] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:59.603 06:23:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 71615 00:13:59.862 [2024-11-26 06:23:43.919011] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:01.287 06:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.jXNJXV3s8X 00:14:01.287 06:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:01.287 06:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:01.287 06:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:14:01.287 06:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:14:01.287 06:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:01.287 06:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:01.287 ************************************ 00:14:01.287 END TEST raid_write_error_test 00:14:01.287 ************************************ 00:14:01.287 06:23:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- 
# [[ 0.74 != \0\.\0\0 ]] 00:14:01.287 00:14:01.287 real 0m5.143s 00:14:01.287 user 0m5.906s 00:14:01.287 sys 0m0.757s 00:14:01.287 06:23:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:01.287 06:23:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.287 06:23:45 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:14:01.287 06:23:45 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:14:01.287 06:23:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:01.287 06:23:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:01.287 06:23:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:01.287 ************************************ 00:14:01.287 START TEST raid_state_function_test 00:14:01.287 ************************************ 00:14:01.287 06:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:14:01.287 06:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:14:01.287 06:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:01.288 06:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:01.288 06:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:01.288 06:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:01.288 06:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:01.288 06:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:01.288 06:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:01.288 06:23:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:01.288 06:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:01.288 06:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:01.288 06:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:01.288 06:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:01.288 06:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:01.288 06:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:01.288 06:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:01.288 06:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:01.288 06:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:01.288 06:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:01.288 06:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:01.288 06:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:01.288 06:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:01.288 06:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:01.288 06:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:01.288 06:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:14:01.288 06:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:01.288 06:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # 
strip_size_create_arg='-z 64' 00:14:01.288 06:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:01.288 06:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:01.288 06:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71762 00:14:01.288 06:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:01.288 06:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71762' 00:14:01.288 Process raid pid: 71762 00:14:01.288 06:23:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71762 00:14:01.288 06:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 71762 ']' 00:14:01.288 06:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:01.288 06:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:01.288 06:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:01.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:01.288 06:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:01.288 06:23:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.548 [2024-11-26 06:23:45.497453] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:14:01.548 [2024-11-26 06:23:45.497706] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:01.548 [2024-11-26 06:23:45.679463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:01.807 [2024-11-26 06:23:45.838449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.079 [2024-11-26 06:23:46.105003] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:02.079 [2024-11-26 06:23:46.105224] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:02.368 06:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:02.368 06:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:14:02.368 06:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:02.368 06:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.368 06:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.368 [2024-11-26 06:23:46.398619] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:02.368 [2024-11-26 06:23:46.398798] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:02.368 [2024-11-26 06:23:46.398854] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:02.368 [2024-11-26 06:23:46.398923] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:02.368 [2024-11-26 06:23:46.398964] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:14:02.368 [2024-11-26 06:23:46.399025] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:02.368 [2024-11-26 06:23:46.399087] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:02.368 [2024-11-26 06:23:46.399141] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:02.368 06:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.368 06:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:02.368 06:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:02.368 06:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:02.368 06:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:02.368 06:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:02.368 06:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:02.368 06:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.368 06:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.368 06:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.368 06:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.368 06:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.368 06:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.368 06:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:14:02.368 06:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:02.368 06:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.368 06:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.368 "name": "Existed_Raid", 00:14:02.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.368 "strip_size_kb": 64, 00:14:02.368 "state": "configuring", 00:14:02.368 "raid_level": "concat", 00:14:02.368 "superblock": false, 00:14:02.368 "num_base_bdevs": 4, 00:14:02.368 "num_base_bdevs_discovered": 0, 00:14:02.368 "num_base_bdevs_operational": 4, 00:14:02.368 "base_bdevs_list": [ 00:14:02.368 { 00:14:02.368 "name": "BaseBdev1", 00:14:02.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.368 "is_configured": false, 00:14:02.368 "data_offset": 0, 00:14:02.368 "data_size": 0 00:14:02.368 }, 00:14:02.368 { 00:14:02.368 "name": "BaseBdev2", 00:14:02.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.368 "is_configured": false, 00:14:02.368 "data_offset": 0, 00:14:02.368 "data_size": 0 00:14:02.368 }, 00:14:02.368 { 00:14:02.368 "name": "BaseBdev3", 00:14:02.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.368 "is_configured": false, 00:14:02.368 "data_offset": 0, 00:14:02.368 "data_size": 0 00:14:02.368 }, 00:14:02.368 { 00:14:02.368 "name": "BaseBdev4", 00:14:02.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.368 "is_configured": false, 00:14:02.368 "data_offset": 0, 00:14:02.368 "data_size": 0 00:14:02.368 } 00:14:02.368 ] 00:14:02.368 }' 00:14:02.368 06:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.368 06:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.938 06:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
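The `jq`-filtered dump above is the input to `verify_raid_bdev_state Existed_Raid configuring concat 64 4`. The shell body of that helper is not shown in this log, so the following Python re-implementation is only a sketch of the comparisons it performs, run against a trimmed copy of the JSON from the trace:

```python
import json

# Trimmed copy of the raid_bdev_info JSON captured above via
# `rpc_cmd bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'`.
raid_bdev_info = json.loads("""{
  "name": "Existed_Raid",
  "strip_size_kb": 64,
  "state": "configuring",
  "raid_level": "concat",
  "num_base_bdevs": 4,
  "num_base_bdevs_operational": 4,
  "base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": false},
    {"name": "BaseBdev2", "is_configured": false},
    {"name": "BaseBdev3", "is_configured": false},
    {"name": "BaseBdev4", "is_configured": false}
  ]
}""")

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size, num_operational):
    """Approximate the helper's assertions: compare the reported state,
    level, strip size, and operational count, and recount how many base
    bdevs are actually configured."""
    discovered = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == num_operational
    return discovered

# Mirrors: verify_raid_bdev_state Existed_Raid configuring concat 64 4
# At this point in the log no base bdev exists yet, so 0 are discovered.
print(verify_raid_bdev_state(raid_bdev_info, "configuring", "concat", 64, 4))
```

After each `bdev_malloc_create` later in the trace, `num_base_bdevs_discovered` in the dumped JSON increments while the state stays `configuring` until all four base bdevs are claimed.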
00:14:02.938 06:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.938 06:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.938 [2024-11-26 06:23:46.833810] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:02.939 [2024-11-26 06:23:46.833942] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:02.939 06:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.939 06:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:02.939 06:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.939 06:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.939 [2024-11-26 06:23:46.845775] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:02.939 [2024-11-26 06:23:46.845897] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:02.939 [2024-11-26 06:23:46.845949] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:02.939 [2024-11-26 06:23:46.846015] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:02.939 [2024-11-26 06:23:46.846073] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:02.939 [2024-11-26 06:23:46.846129] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:02.939 [2024-11-26 06:23:46.846168] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:02.939 [2024-11-26 06:23:46.846231] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:02.939 06:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.939 06:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:02.939 06:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.939 06:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.939 [2024-11-26 06:23:46.905239] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:02.939 BaseBdev1 00:14:02.939 06:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.939 06:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:02.939 06:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:02.939 06:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:02.939 06:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:02.939 06:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:02.939 06:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:02.939 06:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:02.939 06:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.939 06:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.939 06:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.939 06:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:02.939 06:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.939 06:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.939 [ 00:14:02.939 { 00:14:02.939 "name": "BaseBdev1", 00:14:02.939 "aliases": [ 00:14:02.939 "372517ea-224d-4319-8205-189a56352a89" 00:14:02.939 ], 00:14:02.939 "product_name": "Malloc disk", 00:14:02.939 "block_size": 512, 00:14:02.939 "num_blocks": 65536, 00:14:02.939 "uuid": "372517ea-224d-4319-8205-189a56352a89", 00:14:02.939 "assigned_rate_limits": { 00:14:02.939 "rw_ios_per_sec": 0, 00:14:02.939 "rw_mbytes_per_sec": 0, 00:14:02.939 "r_mbytes_per_sec": 0, 00:14:02.939 "w_mbytes_per_sec": 0 00:14:02.939 }, 00:14:02.939 "claimed": true, 00:14:02.939 "claim_type": "exclusive_write", 00:14:02.939 "zoned": false, 00:14:02.939 "supported_io_types": { 00:14:02.939 "read": true, 00:14:02.939 "write": true, 00:14:02.939 "unmap": true, 00:14:02.939 "flush": true, 00:14:02.939 "reset": true, 00:14:02.939 "nvme_admin": false, 00:14:02.939 "nvme_io": false, 00:14:02.939 "nvme_io_md": false, 00:14:02.939 "write_zeroes": true, 00:14:02.939 "zcopy": true, 00:14:02.939 "get_zone_info": false, 00:14:02.939 "zone_management": false, 00:14:02.939 "zone_append": false, 00:14:02.939 "compare": false, 00:14:02.939 "compare_and_write": false, 00:14:02.939 "abort": true, 00:14:02.939 "seek_hole": false, 00:14:02.939 "seek_data": false, 00:14:02.939 "copy": true, 00:14:02.939 "nvme_iov_md": false 00:14:02.939 }, 00:14:02.939 "memory_domains": [ 00:14:02.939 { 00:14:02.939 "dma_device_id": "system", 00:14:02.939 "dma_device_type": 1 00:14:02.939 }, 00:14:02.939 { 00:14:02.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:02.939 "dma_device_type": 2 00:14:02.939 } 00:14:02.939 ], 00:14:02.939 "driver_specific": {} 00:14:02.939 } 00:14:02.939 ] 00:14:02.939 06:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:02.939 06:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:02.939 06:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:02.939 06:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:02.939 06:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:02.939 06:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:02.939 06:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:02.939 06:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:02.939 06:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.939 06:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.939 06:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.939 06:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.939 06:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.939 06:23:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:02.939 06:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.939 06:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.939 06:23:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.939 06:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.939 "name": "Existed_Raid", 
00:14:02.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.939 "strip_size_kb": 64, 00:14:02.939 "state": "configuring", 00:14:02.939 "raid_level": "concat", 00:14:02.939 "superblock": false, 00:14:02.939 "num_base_bdevs": 4, 00:14:02.939 "num_base_bdevs_discovered": 1, 00:14:02.939 "num_base_bdevs_operational": 4, 00:14:02.939 "base_bdevs_list": [ 00:14:02.939 { 00:14:02.939 "name": "BaseBdev1", 00:14:02.939 "uuid": "372517ea-224d-4319-8205-189a56352a89", 00:14:02.939 "is_configured": true, 00:14:02.939 "data_offset": 0, 00:14:02.939 "data_size": 65536 00:14:02.939 }, 00:14:02.939 { 00:14:02.939 "name": "BaseBdev2", 00:14:02.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.939 "is_configured": false, 00:14:02.939 "data_offset": 0, 00:14:02.939 "data_size": 0 00:14:02.939 }, 00:14:02.939 { 00:14:02.939 "name": "BaseBdev3", 00:14:02.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.939 "is_configured": false, 00:14:02.939 "data_offset": 0, 00:14:02.939 "data_size": 0 00:14:02.939 }, 00:14:02.939 { 00:14:02.939 "name": "BaseBdev4", 00:14:02.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.939 "is_configured": false, 00:14:02.939 "data_offset": 0, 00:14:02.939 "data_size": 0 00:14:02.939 } 00:14:02.939 ] 00:14:02.939 }' 00:14:02.939 06:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.939 06:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.509 06:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:03.509 06:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.509 06:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.509 [2024-11-26 06:23:47.388559] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:03.509 [2024-11-26 06:23:47.388739] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:03.509 06:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.509 06:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:03.509 06:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.509 06:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.509 [2024-11-26 06:23:47.400646] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:03.509 [2024-11-26 06:23:47.403187] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:03.509 [2024-11-26 06:23:47.403342] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:03.509 [2024-11-26 06:23:47.403399] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:03.509 [2024-11-26 06:23:47.403473] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:03.509 [2024-11-26 06:23:47.403517] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:03.509 [2024-11-26 06:23:47.403576] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:03.509 06:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.509 06:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:03.509 06:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:03.510 06:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:14:03.510 06:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:03.510 06:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:03.510 06:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:03.510 06:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:03.510 06:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:03.510 06:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:03.510 06:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:03.510 06:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:03.510 06:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:03.510 06:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.510 06:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:03.510 06:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.510 06:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.510 06:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.510 06:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:03.510 "name": "Existed_Raid", 00:14:03.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.510 "strip_size_kb": 64, 00:14:03.510 "state": "configuring", 00:14:03.510 "raid_level": "concat", 00:14:03.510 "superblock": false, 00:14:03.510 "num_base_bdevs": 4, 00:14:03.510 
"num_base_bdevs_discovered": 1, 00:14:03.510 "num_base_bdevs_operational": 4, 00:14:03.510 "base_bdevs_list": [ 00:14:03.510 { 00:14:03.510 "name": "BaseBdev1", 00:14:03.510 "uuid": "372517ea-224d-4319-8205-189a56352a89", 00:14:03.510 "is_configured": true, 00:14:03.510 "data_offset": 0, 00:14:03.510 "data_size": 65536 00:14:03.510 }, 00:14:03.510 { 00:14:03.510 "name": "BaseBdev2", 00:14:03.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.510 "is_configured": false, 00:14:03.510 "data_offset": 0, 00:14:03.510 "data_size": 0 00:14:03.510 }, 00:14:03.510 { 00:14:03.510 "name": "BaseBdev3", 00:14:03.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.510 "is_configured": false, 00:14:03.510 "data_offset": 0, 00:14:03.510 "data_size": 0 00:14:03.510 }, 00:14:03.510 { 00:14:03.510 "name": "BaseBdev4", 00:14:03.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.510 "is_configured": false, 00:14:03.510 "data_offset": 0, 00:14:03.510 "data_size": 0 00:14:03.510 } 00:14:03.510 ] 00:14:03.510 }' 00:14:03.510 06:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:03.510 06:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.770 06:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:03.770 06:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.770 06:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.029 [2024-11-26 06:23:47.925424] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:04.029 BaseBdev2 00:14:04.029 06:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.029 06:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:04.029 06:23:47 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:04.029 06:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:04.029 06:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:04.029 06:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:04.029 06:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:04.029 06:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:04.029 06:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.029 06:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.029 06:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.029 06:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:04.029 06:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.029 06:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.029 [ 00:14:04.029 { 00:14:04.029 "name": "BaseBdev2", 00:14:04.029 "aliases": [ 00:14:04.029 "01162cae-9694-4e17-98b6-02518f82c97c" 00:14:04.029 ], 00:14:04.029 "product_name": "Malloc disk", 00:14:04.029 "block_size": 512, 00:14:04.029 "num_blocks": 65536, 00:14:04.029 "uuid": "01162cae-9694-4e17-98b6-02518f82c97c", 00:14:04.029 "assigned_rate_limits": { 00:14:04.029 "rw_ios_per_sec": 0, 00:14:04.029 "rw_mbytes_per_sec": 0, 00:14:04.029 "r_mbytes_per_sec": 0, 00:14:04.029 "w_mbytes_per_sec": 0 00:14:04.029 }, 00:14:04.029 "claimed": true, 00:14:04.029 "claim_type": "exclusive_write", 00:14:04.029 "zoned": false, 00:14:04.029 "supported_io_types": { 
00:14:04.029 "read": true, 00:14:04.029 "write": true, 00:14:04.029 "unmap": true, 00:14:04.029 "flush": true, 00:14:04.029 "reset": true, 00:14:04.029 "nvme_admin": false, 00:14:04.029 "nvme_io": false, 00:14:04.029 "nvme_io_md": false, 00:14:04.029 "write_zeroes": true, 00:14:04.029 "zcopy": true, 00:14:04.029 "get_zone_info": false, 00:14:04.029 "zone_management": false, 00:14:04.029 "zone_append": false, 00:14:04.029 "compare": false, 00:14:04.029 "compare_and_write": false, 00:14:04.029 "abort": true, 00:14:04.029 "seek_hole": false, 00:14:04.029 "seek_data": false, 00:14:04.029 "copy": true, 00:14:04.029 "nvme_iov_md": false 00:14:04.029 }, 00:14:04.029 "memory_domains": [ 00:14:04.029 { 00:14:04.029 "dma_device_id": "system", 00:14:04.029 "dma_device_type": 1 00:14:04.029 }, 00:14:04.029 { 00:14:04.029 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:04.029 "dma_device_type": 2 00:14:04.029 } 00:14:04.029 ], 00:14:04.029 "driver_specific": {} 00:14:04.029 } 00:14:04.029 ] 00:14:04.029 06:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.029 06:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:04.029 06:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:04.029 06:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:04.029 06:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:04.029 06:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:04.029 06:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:04.029 06:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:04.029 06:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:14:04.029 06:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:04.029 06:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:04.029 06:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:04.029 06:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:04.029 06:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:04.029 06:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:04.029 06:23:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.029 06:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.029 06:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.029 06:23:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.029 06:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:04.029 "name": "Existed_Raid", 00:14:04.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.029 "strip_size_kb": 64, 00:14:04.029 "state": "configuring", 00:14:04.029 "raid_level": "concat", 00:14:04.029 "superblock": false, 00:14:04.029 "num_base_bdevs": 4, 00:14:04.029 "num_base_bdevs_discovered": 2, 00:14:04.029 "num_base_bdevs_operational": 4, 00:14:04.029 "base_bdevs_list": [ 00:14:04.029 { 00:14:04.029 "name": "BaseBdev1", 00:14:04.029 "uuid": "372517ea-224d-4319-8205-189a56352a89", 00:14:04.029 "is_configured": true, 00:14:04.029 "data_offset": 0, 00:14:04.029 "data_size": 65536 00:14:04.029 }, 00:14:04.029 { 00:14:04.029 "name": "BaseBdev2", 00:14:04.029 "uuid": "01162cae-9694-4e17-98b6-02518f82c97c", 00:14:04.029 
"is_configured": true, 00:14:04.029 "data_offset": 0, 00:14:04.029 "data_size": 65536 00:14:04.029 }, 00:14:04.029 { 00:14:04.029 "name": "BaseBdev3", 00:14:04.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.029 "is_configured": false, 00:14:04.029 "data_offset": 0, 00:14:04.029 "data_size": 0 00:14:04.029 }, 00:14:04.029 { 00:14:04.029 "name": "BaseBdev4", 00:14:04.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.029 "is_configured": false, 00:14:04.029 "data_offset": 0, 00:14:04.029 "data_size": 0 00:14:04.029 } 00:14:04.029 ] 00:14:04.029 }' 00:14:04.029 06:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:04.029 06:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.598 06:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:04.598 06:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.598 06:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.598 [2024-11-26 06:23:48.528597] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:04.598 BaseBdev3 00:14:04.598 06:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.598 06:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:04.598 06:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:04.598 06:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:04.598 06:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:04.598 06:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:04.598 06:23:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:04.598 06:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:04.598 06:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.598 06:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.598 06:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.598 06:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:04.598 06:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.598 06:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.598 [ 00:14:04.598 { 00:14:04.598 "name": "BaseBdev3", 00:14:04.598 "aliases": [ 00:14:04.598 "f1a1407a-7d63-4e6b-86e8-030a14c4f5af" 00:14:04.598 ], 00:14:04.598 "product_name": "Malloc disk", 00:14:04.598 "block_size": 512, 00:14:04.598 "num_blocks": 65536, 00:14:04.598 "uuid": "f1a1407a-7d63-4e6b-86e8-030a14c4f5af", 00:14:04.598 "assigned_rate_limits": { 00:14:04.598 "rw_ios_per_sec": 0, 00:14:04.598 "rw_mbytes_per_sec": 0, 00:14:04.598 "r_mbytes_per_sec": 0, 00:14:04.598 "w_mbytes_per_sec": 0 00:14:04.598 }, 00:14:04.598 "claimed": true, 00:14:04.598 "claim_type": "exclusive_write", 00:14:04.598 "zoned": false, 00:14:04.598 "supported_io_types": { 00:14:04.598 "read": true, 00:14:04.598 "write": true, 00:14:04.598 "unmap": true, 00:14:04.598 "flush": true, 00:14:04.598 "reset": true, 00:14:04.598 "nvme_admin": false, 00:14:04.598 "nvme_io": false, 00:14:04.598 "nvme_io_md": false, 00:14:04.598 "write_zeroes": true, 00:14:04.598 "zcopy": true, 00:14:04.598 "get_zone_info": false, 00:14:04.598 "zone_management": false, 00:14:04.598 "zone_append": false, 00:14:04.598 "compare": false, 00:14:04.598 "compare_and_write": false, 
00:14:04.598 "abort": true, 00:14:04.598 "seek_hole": false, 00:14:04.598 "seek_data": false, 00:14:04.598 "copy": true, 00:14:04.598 "nvme_iov_md": false 00:14:04.598 }, 00:14:04.598 "memory_domains": [ 00:14:04.598 { 00:14:04.598 "dma_device_id": "system", 00:14:04.598 "dma_device_type": 1 00:14:04.598 }, 00:14:04.598 { 00:14:04.598 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:04.598 "dma_device_type": 2 00:14:04.598 } 00:14:04.598 ], 00:14:04.598 "driver_specific": {} 00:14:04.598 } 00:14:04.598 ] 00:14:04.598 06:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.598 06:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:04.598 06:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:04.598 06:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:04.598 06:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:04.598 06:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:04.598 06:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:04.598 06:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:04.598 06:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:04.598 06:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:04.598 06:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:04.598 06:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:04.598 06:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:14:04.598 06:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:04.598 06:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.598 06:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:04.598 06:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.598 06:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.598 06:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.598 06:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:04.598 "name": "Existed_Raid", 00:14:04.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.598 "strip_size_kb": 64, 00:14:04.598 "state": "configuring", 00:14:04.598 "raid_level": "concat", 00:14:04.598 "superblock": false, 00:14:04.598 "num_base_bdevs": 4, 00:14:04.598 "num_base_bdevs_discovered": 3, 00:14:04.598 "num_base_bdevs_operational": 4, 00:14:04.598 "base_bdevs_list": [ 00:14:04.598 { 00:14:04.598 "name": "BaseBdev1", 00:14:04.598 "uuid": "372517ea-224d-4319-8205-189a56352a89", 00:14:04.598 "is_configured": true, 00:14:04.598 "data_offset": 0, 00:14:04.598 "data_size": 65536 00:14:04.598 }, 00:14:04.598 { 00:14:04.598 "name": "BaseBdev2", 00:14:04.598 "uuid": "01162cae-9694-4e17-98b6-02518f82c97c", 00:14:04.598 "is_configured": true, 00:14:04.598 "data_offset": 0, 00:14:04.598 "data_size": 65536 00:14:04.598 }, 00:14:04.598 { 00:14:04.598 "name": "BaseBdev3", 00:14:04.598 "uuid": "f1a1407a-7d63-4e6b-86e8-030a14c4f5af", 00:14:04.598 "is_configured": true, 00:14:04.598 "data_offset": 0, 00:14:04.598 "data_size": 65536 00:14:04.598 }, 00:14:04.598 { 00:14:04.598 "name": "BaseBdev4", 00:14:04.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.598 "is_configured": false, 
00:14:04.599 "data_offset": 0, 00:14:04.599 "data_size": 0 00:14:04.599 } 00:14:04.599 ] 00:14:04.599 }' 00:14:04.599 06:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:04.599 06:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.169 06:23:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:05.169 06:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.169 06:23:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.169 [2024-11-26 06:23:49.048871] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:05.169 [2024-11-26 06:23:49.049130] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:05.169 [2024-11-26 06:23:49.049187] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:14:05.169 [2024-11-26 06:23:49.049618] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:05.169 [2024-11-26 06:23:49.049897] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:05.169 [2024-11-26 06:23:49.049961] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:05.169 BaseBdev4 00:14:05.169 [2024-11-26 06:23:49.050396] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:05.169 06:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.169 06:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:05.169 06:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:05.169 06:23:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:05.169 06:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:05.169 06:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:05.169 06:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:05.169 06:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:05.169 06:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.169 06:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.169 06:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.169 06:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:05.169 06:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.169 06:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.169 [ 00:14:05.169 { 00:14:05.169 "name": "BaseBdev4", 00:14:05.169 "aliases": [ 00:14:05.169 "b781c452-7c17-4c7e-82b6-acc85490966f" 00:14:05.169 ], 00:14:05.169 "product_name": "Malloc disk", 00:14:05.169 "block_size": 512, 00:14:05.169 "num_blocks": 65536, 00:14:05.169 "uuid": "b781c452-7c17-4c7e-82b6-acc85490966f", 00:14:05.169 "assigned_rate_limits": { 00:14:05.169 "rw_ios_per_sec": 0, 00:14:05.169 "rw_mbytes_per_sec": 0, 00:14:05.169 "r_mbytes_per_sec": 0, 00:14:05.169 "w_mbytes_per_sec": 0 00:14:05.169 }, 00:14:05.169 "claimed": true, 00:14:05.169 "claim_type": "exclusive_write", 00:14:05.169 "zoned": false, 00:14:05.169 "supported_io_types": { 00:14:05.169 "read": true, 00:14:05.169 "write": true, 00:14:05.169 "unmap": true, 00:14:05.169 "flush": true, 00:14:05.169 "reset": true, 00:14:05.169 
"nvme_admin": false, 00:14:05.169 "nvme_io": false, 00:14:05.169 "nvme_io_md": false, 00:14:05.169 "write_zeroes": true, 00:14:05.169 "zcopy": true, 00:14:05.169 "get_zone_info": false, 00:14:05.169 "zone_management": false, 00:14:05.169 "zone_append": false, 00:14:05.169 "compare": false, 00:14:05.169 "compare_and_write": false, 00:14:05.169 "abort": true, 00:14:05.169 "seek_hole": false, 00:14:05.169 "seek_data": false, 00:14:05.169 "copy": true, 00:14:05.169 "nvme_iov_md": false 00:14:05.169 }, 00:14:05.169 "memory_domains": [ 00:14:05.169 { 00:14:05.169 "dma_device_id": "system", 00:14:05.169 "dma_device_type": 1 00:14:05.169 }, 00:14:05.169 { 00:14:05.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:05.169 "dma_device_type": 2 00:14:05.169 } 00:14:05.169 ], 00:14:05.169 "driver_specific": {} 00:14:05.169 } 00:14:05.169 ] 00:14:05.169 06:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.169 06:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:05.169 06:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:05.169 06:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:05.169 06:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:14:05.169 06:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:05.169 06:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:05.169 06:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:05.169 06:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:05.169 06:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:05.169 
06:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:05.169 06:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:05.169 06:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:05.169 06:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:05.169 06:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.169 06:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:05.169 06:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.169 06:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.169 06:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.169 06:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.169 "name": "Existed_Raid", 00:14:05.169 "uuid": "79deb709-3e58-4a23-a0ee-6cd04bba2169", 00:14:05.169 "strip_size_kb": 64, 00:14:05.169 "state": "online", 00:14:05.169 "raid_level": "concat", 00:14:05.169 "superblock": false, 00:14:05.169 "num_base_bdevs": 4, 00:14:05.169 "num_base_bdevs_discovered": 4, 00:14:05.169 "num_base_bdevs_operational": 4, 00:14:05.169 "base_bdevs_list": [ 00:14:05.169 { 00:14:05.169 "name": "BaseBdev1", 00:14:05.169 "uuid": "372517ea-224d-4319-8205-189a56352a89", 00:14:05.169 "is_configured": true, 00:14:05.169 "data_offset": 0, 00:14:05.169 "data_size": 65536 00:14:05.169 }, 00:14:05.169 { 00:14:05.169 "name": "BaseBdev2", 00:14:05.169 "uuid": "01162cae-9694-4e17-98b6-02518f82c97c", 00:14:05.169 "is_configured": true, 00:14:05.169 "data_offset": 0, 00:14:05.169 "data_size": 65536 00:14:05.169 }, 00:14:05.169 { 00:14:05.169 "name": "BaseBdev3", 
00:14:05.169 "uuid": "f1a1407a-7d63-4e6b-86e8-030a14c4f5af", 00:14:05.169 "is_configured": true, 00:14:05.169 "data_offset": 0, 00:14:05.169 "data_size": 65536 00:14:05.169 }, 00:14:05.169 { 00:14:05.169 "name": "BaseBdev4", 00:14:05.169 "uuid": "b781c452-7c17-4c7e-82b6-acc85490966f", 00:14:05.169 "is_configured": true, 00:14:05.169 "data_offset": 0, 00:14:05.169 "data_size": 65536 00:14:05.169 } 00:14:05.169 ] 00:14:05.169 }' 00:14:05.169 06:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:05.169 06:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.739 06:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:05.739 06:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:05.739 06:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:05.739 06:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:05.739 06:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:05.739 06:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:05.739 06:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:05.739 06:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:05.739 06:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.739 06:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.739 [2024-11-26 06:23:49.596631] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:05.739 06:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.739 
06:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:05.739 "name": "Existed_Raid", 00:14:05.739 "aliases": [ 00:14:05.739 "79deb709-3e58-4a23-a0ee-6cd04bba2169" 00:14:05.739 ], 00:14:05.739 "product_name": "Raid Volume", 00:14:05.739 "block_size": 512, 00:14:05.739 "num_blocks": 262144, 00:14:05.739 "uuid": "79deb709-3e58-4a23-a0ee-6cd04bba2169", 00:14:05.739 "assigned_rate_limits": { 00:14:05.739 "rw_ios_per_sec": 0, 00:14:05.739 "rw_mbytes_per_sec": 0, 00:14:05.739 "r_mbytes_per_sec": 0, 00:14:05.739 "w_mbytes_per_sec": 0 00:14:05.739 }, 00:14:05.739 "claimed": false, 00:14:05.739 "zoned": false, 00:14:05.739 "supported_io_types": { 00:14:05.739 "read": true, 00:14:05.739 "write": true, 00:14:05.739 "unmap": true, 00:14:05.739 "flush": true, 00:14:05.739 "reset": true, 00:14:05.740 "nvme_admin": false, 00:14:05.740 "nvme_io": false, 00:14:05.740 "nvme_io_md": false, 00:14:05.740 "write_zeroes": true, 00:14:05.740 "zcopy": false, 00:14:05.740 "get_zone_info": false, 00:14:05.740 "zone_management": false, 00:14:05.740 "zone_append": false, 00:14:05.740 "compare": false, 00:14:05.740 "compare_and_write": false, 00:14:05.740 "abort": false, 00:14:05.740 "seek_hole": false, 00:14:05.740 "seek_data": false, 00:14:05.740 "copy": false, 00:14:05.740 "nvme_iov_md": false 00:14:05.740 }, 00:14:05.740 "memory_domains": [ 00:14:05.740 { 00:14:05.740 "dma_device_id": "system", 00:14:05.740 "dma_device_type": 1 00:14:05.740 }, 00:14:05.740 { 00:14:05.740 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:05.740 "dma_device_type": 2 00:14:05.740 }, 00:14:05.740 { 00:14:05.740 "dma_device_id": "system", 00:14:05.740 "dma_device_type": 1 00:14:05.740 }, 00:14:05.740 { 00:14:05.740 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:05.740 "dma_device_type": 2 00:14:05.740 }, 00:14:05.740 { 00:14:05.740 "dma_device_id": "system", 00:14:05.740 "dma_device_type": 1 00:14:05.740 }, 00:14:05.740 { 00:14:05.740 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:14:05.740 "dma_device_type": 2 00:14:05.740 }, 00:14:05.740 { 00:14:05.740 "dma_device_id": "system", 00:14:05.740 "dma_device_type": 1 00:14:05.740 }, 00:14:05.740 { 00:14:05.740 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:05.740 "dma_device_type": 2 00:14:05.740 } 00:14:05.740 ], 00:14:05.740 "driver_specific": { 00:14:05.740 "raid": { 00:14:05.740 "uuid": "79deb709-3e58-4a23-a0ee-6cd04bba2169", 00:14:05.740 "strip_size_kb": 64, 00:14:05.740 "state": "online", 00:14:05.740 "raid_level": "concat", 00:14:05.740 "superblock": false, 00:14:05.740 "num_base_bdevs": 4, 00:14:05.740 "num_base_bdevs_discovered": 4, 00:14:05.740 "num_base_bdevs_operational": 4, 00:14:05.740 "base_bdevs_list": [ 00:14:05.740 { 00:14:05.740 "name": "BaseBdev1", 00:14:05.740 "uuid": "372517ea-224d-4319-8205-189a56352a89", 00:14:05.740 "is_configured": true, 00:14:05.740 "data_offset": 0, 00:14:05.740 "data_size": 65536 00:14:05.740 }, 00:14:05.740 { 00:14:05.740 "name": "BaseBdev2", 00:14:05.740 "uuid": "01162cae-9694-4e17-98b6-02518f82c97c", 00:14:05.740 "is_configured": true, 00:14:05.740 "data_offset": 0, 00:14:05.740 "data_size": 65536 00:14:05.740 }, 00:14:05.740 { 00:14:05.740 "name": "BaseBdev3", 00:14:05.740 "uuid": "f1a1407a-7d63-4e6b-86e8-030a14c4f5af", 00:14:05.740 "is_configured": true, 00:14:05.740 "data_offset": 0, 00:14:05.740 "data_size": 65536 00:14:05.740 }, 00:14:05.740 { 00:14:05.740 "name": "BaseBdev4", 00:14:05.740 "uuid": "b781c452-7c17-4c7e-82b6-acc85490966f", 00:14:05.740 "is_configured": true, 00:14:05.740 "data_offset": 0, 00:14:05.740 "data_size": 65536 00:14:05.740 } 00:14:05.740 ] 00:14:05.740 } 00:14:05.740 } 00:14:05.740 }' 00:14:05.740 06:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:05.740 06:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:05.740 BaseBdev2 
00:14:05.740 BaseBdev3 00:14:05.740 BaseBdev4' 00:14:05.740 06:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:05.740 06:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:05.740 06:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:05.740 06:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:05.740 06:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.740 06:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.740 06:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:05.740 06:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.740 06:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:05.740 06:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:05.740 06:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:05.740 06:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:05.740 06:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:05.740 06:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.740 06:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.740 06:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.740 06:23:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:05.740 06:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:05.740 06:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:05.740 06:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:05.740 06:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:05.740 06:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.740 06:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.740 06:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.740 06:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:05.740 06:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:05.740 06:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:05.740 06:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:05.740 06:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.740 06:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:05.740 06:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.740 06:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.999 06:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:05.999 06:23:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:05.999 06:23:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:05.999 06:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.999 06:23:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.999 [2024-11-26 06:23:49.891886] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:05.999 [2024-11-26 06:23:49.891928] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:05.999 [2024-11-26 06:23:49.891993] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:05.999 06:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.999 06:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:05.999 06:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:14:05.999 06:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:05.999 06:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:05.999 06:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:14:05.999 06:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:14:05.999 06:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:05.999 06:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:14:05.999 06:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:05.999 06:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:14:06.000 06:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:06.000 06:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.000 06:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.000 06:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.000 06:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.000 06:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.000 06:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:06.000 06:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.000 06:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.000 06:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.000 06:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.000 "name": "Existed_Raid", 00:14:06.000 "uuid": "79deb709-3e58-4a23-a0ee-6cd04bba2169", 00:14:06.000 "strip_size_kb": 64, 00:14:06.000 "state": "offline", 00:14:06.000 "raid_level": "concat", 00:14:06.000 "superblock": false, 00:14:06.000 "num_base_bdevs": 4, 00:14:06.000 "num_base_bdevs_discovered": 3, 00:14:06.000 "num_base_bdevs_operational": 3, 00:14:06.000 "base_bdevs_list": [ 00:14:06.000 { 00:14:06.000 "name": null, 00:14:06.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.000 "is_configured": false, 00:14:06.000 "data_offset": 0, 00:14:06.000 "data_size": 65536 00:14:06.000 }, 00:14:06.000 { 00:14:06.000 "name": "BaseBdev2", 00:14:06.000 "uuid": "01162cae-9694-4e17-98b6-02518f82c97c", 00:14:06.000 "is_configured": 
true, 00:14:06.000 "data_offset": 0, 00:14:06.000 "data_size": 65536 00:14:06.000 }, 00:14:06.000 { 00:14:06.000 "name": "BaseBdev3", 00:14:06.000 "uuid": "f1a1407a-7d63-4e6b-86e8-030a14c4f5af", 00:14:06.000 "is_configured": true, 00:14:06.000 "data_offset": 0, 00:14:06.000 "data_size": 65536 00:14:06.000 }, 00:14:06.000 { 00:14:06.000 "name": "BaseBdev4", 00:14:06.000 "uuid": "b781c452-7c17-4c7e-82b6-acc85490966f", 00:14:06.000 "is_configured": true, 00:14:06.000 "data_offset": 0, 00:14:06.000 "data_size": 65536 00:14:06.000 } 00:14:06.000 ] 00:14:06.000 }' 00:14:06.000 06:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.000 06:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.570 06:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:06.570 06:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:06.570 06:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.570 06:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:06.570 06:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.570 06:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.570 06:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.570 06:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:06.570 06:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:06.570 06:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:06.570 06:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:06.570 06:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.570 [2024-11-26 06:23:50.521013] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:06.570 06:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.570 06:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:06.570 06:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:06.570 06:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.570 06:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:06.570 06:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.570 06:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.570 06:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.570 06:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:06.570 06:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:06.570 06:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:06.570 06:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.570 06:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.570 [2024-11-26 06:23:50.693613] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:06.829 06:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.829 06:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:06.829 06:23:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:06.829 06:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.829 06:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:06.829 06:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.829 06:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.829 06:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.829 06:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:06.829 06:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:06.829 06:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:06.829 06:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.829 06:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.829 [2024-11-26 06:23:50.867609] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:06.829 [2024-11-26 06:23:50.867745] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:07.089 06:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.089 06:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:07.089 06:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:07.089 06:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.089 06:23:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:14:07.089 06:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.089 06:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.089 06:23:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.089 06:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:07.089 06:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:07.089 06:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:14:07.089 06:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:07.089 06:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:07.089 06:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:07.089 06:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.089 06:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.089 BaseBdev2 00:14:07.089 06:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.089 06:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:07.089 06:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:07.089 06:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:07.089 06:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:07.089 06:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:07.089 06:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:14:07.089 06:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:07.089 06:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.089 06:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.089 06:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.089 06:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:07.089 06:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.089 06:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.089 [ 00:14:07.089 { 00:14:07.089 "name": "BaseBdev2", 00:14:07.089 "aliases": [ 00:14:07.089 "d032ec74-44e2-4b39-89e9-406a2ff3f89b" 00:14:07.089 ], 00:14:07.089 "product_name": "Malloc disk", 00:14:07.089 "block_size": 512, 00:14:07.089 "num_blocks": 65536, 00:14:07.089 "uuid": "d032ec74-44e2-4b39-89e9-406a2ff3f89b", 00:14:07.089 "assigned_rate_limits": { 00:14:07.089 "rw_ios_per_sec": 0, 00:14:07.089 "rw_mbytes_per_sec": 0, 00:14:07.089 "r_mbytes_per_sec": 0, 00:14:07.089 "w_mbytes_per_sec": 0 00:14:07.089 }, 00:14:07.089 "claimed": false, 00:14:07.089 "zoned": false, 00:14:07.089 "supported_io_types": { 00:14:07.089 "read": true, 00:14:07.089 "write": true, 00:14:07.089 "unmap": true, 00:14:07.089 "flush": true, 00:14:07.089 "reset": true, 00:14:07.089 "nvme_admin": false, 00:14:07.089 "nvme_io": false, 00:14:07.089 "nvme_io_md": false, 00:14:07.089 "write_zeroes": true, 00:14:07.089 "zcopy": true, 00:14:07.089 "get_zone_info": false, 00:14:07.089 "zone_management": false, 00:14:07.089 "zone_append": false, 00:14:07.089 "compare": false, 00:14:07.089 "compare_and_write": false, 00:14:07.089 "abort": true, 00:14:07.089 "seek_hole": false, 00:14:07.089 
"seek_data": false, 00:14:07.089 "copy": true, 00:14:07.089 "nvme_iov_md": false 00:14:07.089 }, 00:14:07.089 "memory_domains": [ 00:14:07.089 { 00:14:07.089 "dma_device_id": "system", 00:14:07.089 "dma_device_type": 1 00:14:07.089 }, 00:14:07.089 { 00:14:07.089 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:07.089 "dma_device_type": 2 00:14:07.089 } 00:14:07.089 ], 00:14:07.089 "driver_specific": {} 00:14:07.089 } 00:14:07.089 ] 00:14:07.089 06:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.089 06:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:07.089 06:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:07.089 06:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:07.089 06:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:07.089 06:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.089 06:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.089 BaseBdev3 00:14:07.089 06:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.089 06:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:07.089 06:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:07.089 06:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:07.089 06:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:07.089 06:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:07.089 06:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:14:07.089 06:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:07.089 06:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.089 06:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.089 06:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.089 06:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:07.089 06:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.089 06:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.089 [ 00:14:07.089 { 00:14:07.089 "name": "BaseBdev3", 00:14:07.089 "aliases": [ 00:14:07.089 "be3a5c43-0452-460f-ba31-0f11a165ab87" 00:14:07.089 ], 00:14:07.089 "product_name": "Malloc disk", 00:14:07.089 "block_size": 512, 00:14:07.089 "num_blocks": 65536, 00:14:07.089 "uuid": "be3a5c43-0452-460f-ba31-0f11a165ab87", 00:14:07.089 "assigned_rate_limits": { 00:14:07.089 "rw_ios_per_sec": 0, 00:14:07.089 "rw_mbytes_per_sec": 0, 00:14:07.090 "r_mbytes_per_sec": 0, 00:14:07.090 "w_mbytes_per_sec": 0 00:14:07.090 }, 00:14:07.090 "claimed": false, 00:14:07.090 "zoned": false, 00:14:07.090 "supported_io_types": { 00:14:07.090 "read": true, 00:14:07.090 "write": true, 00:14:07.090 "unmap": true, 00:14:07.090 "flush": true, 00:14:07.090 "reset": true, 00:14:07.090 "nvme_admin": false, 00:14:07.090 "nvme_io": false, 00:14:07.090 "nvme_io_md": false, 00:14:07.090 "write_zeroes": true, 00:14:07.090 "zcopy": true, 00:14:07.090 "get_zone_info": false, 00:14:07.090 "zone_management": false, 00:14:07.090 "zone_append": false, 00:14:07.090 "compare": false, 00:14:07.090 "compare_and_write": false, 00:14:07.090 "abort": true, 00:14:07.090 "seek_hole": false, 00:14:07.090 "seek_data": false, 
00:14:07.090 "copy": true, 00:14:07.090 "nvme_iov_md": false 00:14:07.090 }, 00:14:07.090 "memory_domains": [ 00:14:07.090 { 00:14:07.090 "dma_device_id": "system", 00:14:07.090 "dma_device_type": 1 00:14:07.090 }, 00:14:07.090 { 00:14:07.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:07.090 "dma_device_type": 2 00:14:07.090 } 00:14:07.090 ], 00:14:07.090 "driver_specific": {} 00:14:07.090 } 00:14:07.090 ] 00:14:07.090 06:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.090 06:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:07.090 06:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:07.090 06:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:07.090 06:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:07.090 06:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.349 06:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.349 BaseBdev4 00:14:07.349 06:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.349 06:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:14:07.349 06:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:07.349 06:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:07.349 06:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:07.349 06:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:07.349 06:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:07.349 
06:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:07.349 06:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.349 06:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.349 06:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.349 06:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:07.349 06:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.349 06:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.349 [ 00:14:07.349 { 00:14:07.349 "name": "BaseBdev4", 00:14:07.349 "aliases": [ 00:14:07.349 "d0d8a179-9d5b-4eca-ace9-f3b8b9499b5e" 00:14:07.349 ], 00:14:07.349 "product_name": "Malloc disk", 00:14:07.349 "block_size": 512, 00:14:07.349 "num_blocks": 65536, 00:14:07.349 "uuid": "d0d8a179-9d5b-4eca-ace9-f3b8b9499b5e", 00:14:07.349 "assigned_rate_limits": { 00:14:07.349 "rw_ios_per_sec": 0, 00:14:07.349 "rw_mbytes_per_sec": 0, 00:14:07.349 "r_mbytes_per_sec": 0, 00:14:07.349 "w_mbytes_per_sec": 0 00:14:07.349 }, 00:14:07.349 "claimed": false, 00:14:07.349 "zoned": false, 00:14:07.349 "supported_io_types": { 00:14:07.349 "read": true, 00:14:07.349 "write": true, 00:14:07.349 "unmap": true, 00:14:07.349 "flush": true, 00:14:07.349 "reset": true, 00:14:07.349 "nvme_admin": false, 00:14:07.349 "nvme_io": false, 00:14:07.349 "nvme_io_md": false, 00:14:07.349 "write_zeroes": true, 00:14:07.349 "zcopy": true, 00:14:07.349 "get_zone_info": false, 00:14:07.349 "zone_management": false, 00:14:07.349 "zone_append": false, 00:14:07.349 "compare": false, 00:14:07.349 "compare_and_write": false, 00:14:07.349 "abort": true, 00:14:07.349 "seek_hole": false, 00:14:07.349 "seek_data": false, 00:14:07.349 
"copy": true, 00:14:07.349 "nvme_iov_md": false 00:14:07.349 }, 00:14:07.349 "memory_domains": [ 00:14:07.349 { 00:14:07.349 "dma_device_id": "system", 00:14:07.350 "dma_device_type": 1 00:14:07.350 }, 00:14:07.350 { 00:14:07.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:07.350 "dma_device_type": 2 00:14:07.350 } 00:14:07.350 ], 00:14:07.350 "driver_specific": {} 00:14:07.350 } 00:14:07.350 ] 00:14:07.350 06:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.350 06:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:07.350 06:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:07.350 06:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:07.350 06:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:07.350 06:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.350 06:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.350 [2024-11-26 06:23:51.313451] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:07.350 [2024-11-26 06:23:51.313577] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:07.350 [2024-11-26 06:23:51.313672] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:07.350 [2024-11-26 06:23:51.316152] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:07.350 [2024-11-26 06:23:51.316291] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:07.350 06:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.350 06:23:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:07.350 06:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:07.350 06:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:07.350 06:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:07.350 06:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:07.350 06:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:07.350 06:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:07.350 06:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:07.350 06:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:07.350 06:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:07.350 06:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.350 06:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:07.350 06:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.350 06:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.350 06:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.350 06:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:07.350 "name": "Existed_Raid", 00:14:07.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.350 "strip_size_kb": 64, 00:14:07.350 "state": "configuring", 00:14:07.350 
"raid_level": "concat", 00:14:07.350 "superblock": false, 00:14:07.350 "num_base_bdevs": 4, 00:14:07.350 "num_base_bdevs_discovered": 3, 00:14:07.350 "num_base_bdevs_operational": 4, 00:14:07.350 "base_bdevs_list": [ 00:14:07.350 { 00:14:07.350 "name": "BaseBdev1", 00:14:07.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.350 "is_configured": false, 00:14:07.350 "data_offset": 0, 00:14:07.350 "data_size": 0 00:14:07.350 }, 00:14:07.350 { 00:14:07.350 "name": "BaseBdev2", 00:14:07.350 "uuid": "d032ec74-44e2-4b39-89e9-406a2ff3f89b", 00:14:07.350 "is_configured": true, 00:14:07.350 "data_offset": 0, 00:14:07.350 "data_size": 65536 00:14:07.350 }, 00:14:07.350 { 00:14:07.350 "name": "BaseBdev3", 00:14:07.350 "uuid": "be3a5c43-0452-460f-ba31-0f11a165ab87", 00:14:07.350 "is_configured": true, 00:14:07.350 "data_offset": 0, 00:14:07.350 "data_size": 65536 00:14:07.350 }, 00:14:07.350 { 00:14:07.350 "name": "BaseBdev4", 00:14:07.350 "uuid": "d0d8a179-9d5b-4eca-ace9-f3b8b9499b5e", 00:14:07.350 "is_configured": true, 00:14:07.350 "data_offset": 0, 00:14:07.350 "data_size": 65536 00:14:07.350 } 00:14:07.350 ] 00:14:07.350 }' 00:14:07.350 06:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:07.350 06:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.947 06:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:07.947 06:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.947 06:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.947 [2024-11-26 06:23:51.788697] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:07.947 06:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.947 06:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:07.947 06:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:07.947 06:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:07.947 06:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:07.947 06:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:07.947 06:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:07.947 06:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:07.947 06:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:07.947 06:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:07.947 06:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:07.947 06:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.947 06:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:07.947 06:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.947 06:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.947 06:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.947 06:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:07.947 "name": "Existed_Raid", 00:14:07.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.947 "strip_size_kb": 64, 00:14:07.947 "state": "configuring", 00:14:07.947 "raid_level": "concat", 00:14:07.947 "superblock": false, 
00:14:07.947 "num_base_bdevs": 4, 00:14:07.947 "num_base_bdevs_discovered": 2, 00:14:07.947 "num_base_bdevs_operational": 4, 00:14:07.947 "base_bdevs_list": [ 00:14:07.947 { 00:14:07.947 "name": "BaseBdev1", 00:14:07.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.947 "is_configured": false, 00:14:07.947 "data_offset": 0, 00:14:07.947 "data_size": 0 00:14:07.947 }, 00:14:07.947 { 00:14:07.947 "name": null, 00:14:07.947 "uuid": "d032ec74-44e2-4b39-89e9-406a2ff3f89b", 00:14:07.947 "is_configured": false, 00:14:07.947 "data_offset": 0, 00:14:07.947 "data_size": 65536 00:14:07.947 }, 00:14:07.947 { 00:14:07.947 "name": "BaseBdev3", 00:14:07.947 "uuid": "be3a5c43-0452-460f-ba31-0f11a165ab87", 00:14:07.947 "is_configured": true, 00:14:07.948 "data_offset": 0, 00:14:07.948 "data_size": 65536 00:14:07.948 }, 00:14:07.948 { 00:14:07.948 "name": "BaseBdev4", 00:14:07.948 "uuid": "d0d8a179-9d5b-4eca-ace9-f3b8b9499b5e", 00:14:07.948 "is_configured": true, 00:14:07.948 "data_offset": 0, 00:14:07.948 "data_size": 65536 00:14:07.948 } 00:14:07.948 ] 00:14:07.948 }' 00:14:07.948 06:23:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:07.948 06:23:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.207 06:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.207 06:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:08.207 06:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.207 06:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.207 06:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.466 06:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:08.466 06:23:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:08.466 06:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.466 06:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.466 [2024-11-26 06:23:52.409376] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:08.466 BaseBdev1 00:14:08.466 06:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.466 06:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:08.466 06:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:08.466 06:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:08.466 06:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:08.466 06:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:08.466 06:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:08.466 06:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:08.466 06:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.466 06:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.466 06:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.466 06:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:08.466 06:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.466 06:23:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:08.466 [ 00:14:08.466 { 00:14:08.466 "name": "BaseBdev1", 00:14:08.466 "aliases": [ 00:14:08.466 "afc32300-d23b-4ea7-a42b-252706c8307b" 00:14:08.466 ], 00:14:08.466 "product_name": "Malloc disk", 00:14:08.466 "block_size": 512, 00:14:08.466 "num_blocks": 65536, 00:14:08.466 "uuid": "afc32300-d23b-4ea7-a42b-252706c8307b", 00:14:08.466 "assigned_rate_limits": { 00:14:08.466 "rw_ios_per_sec": 0, 00:14:08.466 "rw_mbytes_per_sec": 0, 00:14:08.466 "r_mbytes_per_sec": 0, 00:14:08.466 "w_mbytes_per_sec": 0 00:14:08.466 }, 00:14:08.466 "claimed": true, 00:14:08.466 "claim_type": "exclusive_write", 00:14:08.466 "zoned": false, 00:14:08.466 "supported_io_types": { 00:14:08.466 "read": true, 00:14:08.466 "write": true, 00:14:08.466 "unmap": true, 00:14:08.466 "flush": true, 00:14:08.466 "reset": true, 00:14:08.466 "nvme_admin": false, 00:14:08.466 "nvme_io": false, 00:14:08.466 "nvme_io_md": false, 00:14:08.466 "write_zeroes": true, 00:14:08.466 "zcopy": true, 00:14:08.466 "get_zone_info": false, 00:14:08.466 "zone_management": false, 00:14:08.466 "zone_append": false, 00:14:08.466 "compare": false, 00:14:08.466 "compare_and_write": false, 00:14:08.466 "abort": true, 00:14:08.466 "seek_hole": false, 00:14:08.466 "seek_data": false, 00:14:08.466 "copy": true, 00:14:08.466 "nvme_iov_md": false 00:14:08.466 }, 00:14:08.466 "memory_domains": [ 00:14:08.466 { 00:14:08.466 "dma_device_id": "system", 00:14:08.466 "dma_device_type": 1 00:14:08.466 }, 00:14:08.466 { 00:14:08.466 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:08.466 "dma_device_type": 2 00:14:08.466 } 00:14:08.466 ], 00:14:08.466 "driver_specific": {} 00:14:08.466 } 00:14:08.466 ] 00:14:08.466 06:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.466 06:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:08.466 06:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
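Unlike BaseBdev3 and BaseBdev4, whose dumps showed `"claimed": false` because they were created before the raid, BaseBdev1 is created after `bdev_raid_create` registered its name, so the raid module claims it immediately ("bdev BaseBdev1 is claimed" in the log, and `"claimed": true` with `"claim_type": "exclusive_write"` in the dump). A sketch of checking those fields from `bdev_get_bdevs` output:

```python
import json

# Abridged from the BaseBdev1 dump in this log.
bdev = json.loads("""
{
  "name": "BaseBdev1",
  "claimed": true,
  "claim_type": "exclusive_write"
}
""")

# exclusive_write: only the claiming module may open this bdev for writing.
assert bdev["claimed"]
assert bdev["claim_type"] == "exclusive_write"
print("claimed by raid")
```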
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:08.466 06:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:08.466 06:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:08.466 06:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:08.466 06:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:08.466 06:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:08.466 06:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:08.466 06:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.466 06:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:08.466 06:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.466 06:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.466 06:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.466 06:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.466 06:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:08.466 06:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.466 06:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:08.466 "name": "Existed_Raid", 00:14:08.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.466 "strip_size_kb": 64, 00:14:08.466 "state": "configuring", 00:14:08.466 "raid_level": "concat", 00:14:08.466 "superblock": false, 
00:14:08.466 "num_base_bdevs": 4, 00:14:08.466 "num_base_bdevs_discovered": 3, 00:14:08.466 "num_base_bdevs_operational": 4, 00:14:08.466 "base_bdevs_list": [ 00:14:08.466 { 00:14:08.466 "name": "BaseBdev1", 00:14:08.467 "uuid": "afc32300-d23b-4ea7-a42b-252706c8307b", 00:14:08.467 "is_configured": true, 00:14:08.467 "data_offset": 0, 00:14:08.467 "data_size": 65536 00:14:08.467 }, 00:14:08.467 { 00:14:08.467 "name": null, 00:14:08.467 "uuid": "d032ec74-44e2-4b39-89e9-406a2ff3f89b", 00:14:08.467 "is_configured": false, 00:14:08.467 "data_offset": 0, 00:14:08.467 "data_size": 65536 00:14:08.467 }, 00:14:08.467 { 00:14:08.467 "name": "BaseBdev3", 00:14:08.467 "uuid": "be3a5c43-0452-460f-ba31-0f11a165ab87", 00:14:08.467 "is_configured": true, 00:14:08.467 "data_offset": 0, 00:14:08.467 "data_size": 65536 00:14:08.467 }, 00:14:08.467 { 00:14:08.467 "name": "BaseBdev4", 00:14:08.467 "uuid": "d0d8a179-9d5b-4eca-ace9-f3b8b9499b5e", 00:14:08.467 "is_configured": true, 00:14:08.467 "data_offset": 0, 00:14:08.467 "data_size": 65536 00:14:08.467 } 00:14:08.467 ] 00:14:08.467 }' 00:14:08.467 06:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:08.467 06:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.035 06:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:09.035 06:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.035 06:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.035 06:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.035 06:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.035 06:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:09.035 06:23:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:09.035 06:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.035 06:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.035 [2024-11-26 06:23:52.948623] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:09.035 06:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.035 06:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:09.035 06:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:09.035 06:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:09.035 06:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:09.035 06:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:09.035 06:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:09.035 06:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:09.035 06:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:09.035 06:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:09.035 06:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:09.035 06:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.035 06:23:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:09.035 06:23:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.036 06:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.036 06:23:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.036 06:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:09.036 "name": "Existed_Raid", 00:14:09.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.036 "strip_size_kb": 64, 00:14:09.036 "state": "configuring", 00:14:09.036 "raid_level": "concat", 00:14:09.036 "superblock": false, 00:14:09.036 "num_base_bdevs": 4, 00:14:09.036 "num_base_bdevs_discovered": 2, 00:14:09.036 "num_base_bdevs_operational": 4, 00:14:09.036 "base_bdevs_list": [ 00:14:09.036 { 00:14:09.036 "name": "BaseBdev1", 00:14:09.036 "uuid": "afc32300-d23b-4ea7-a42b-252706c8307b", 00:14:09.036 "is_configured": true, 00:14:09.036 "data_offset": 0, 00:14:09.036 "data_size": 65536 00:14:09.036 }, 00:14:09.036 { 00:14:09.036 "name": null, 00:14:09.036 "uuid": "d032ec74-44e2-4b39-89e9-406a2ff3f89b", 00:14:09.036 "is_configured": false, 00:14:09.036 "data_offset": 0, 00:14:09.036 "data_size": 65536 00:14:09.036 }, 00:14:09.036 { 00:14:09.036 "name": null, 00:14:09.036 "uuid": "be3a5c43-0452-460f-ba31-0f11a165ab87", 00:14:09.036 "is_configured": false, 00:14:09.036 "data_offset": 0, 00:14:09.036 "data_size": 65536 00:14:09.036 }, 00:14:09.036 { 00:14:09.036 "name": "BaseBdev4", 00:14:09.036 "uuid": "d0d8a179-9d5b-4eca-ace9-f3b8b9499b5e", 00:14:09.036 "is_configured": true, 00:14:09.036 "data_offset": 0, 00:14:09.036 "data_size": 65536 00:14:09.036 } 00:14:09.036 ] 00:14:09.036 }' 00:14:09.036 06:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:09.036 06:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.604 06:23:53 bdev_raid.raid_state_function_test -- 
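The `jq '.[0].base_bdevs_list[2].is_configured'` filters used between the verify steps index a fixed slot: removed base bdevs keep their position and UUID in `base_bdevs_list`, so slot 2 still refers to BaseBdev3 after its removal. The equivalent lookup in Python (list abridged from the dump above, where both BaseBdev2 and BaseBdev3 have been removed):

```python
import json

base_bdevs_list = json.loads("""
[
  {"name": "BaseBdev1", "uuid": "afc32300-d23b-4ea7-a42b-252706c8307b", "is_configured": true},
  {"name": null,        "uuid": "d032ec74-44e2-4b39-89e9-406a2ff3f89b", "is_configured": false},
  {"name": null,        "uuid": "be3a5c43-0452-460f-ba31-0f11a165ab87", "is_configured": false},
  {"name": "BaseBdev4", "uuid": "d0d8a179-9d5b-4eca-ace9-f3b8b9499b5e", "is_configured": false}
]
""")

# Equivalent of: jq '.[0].base_bdevs_list[2].is_configured'
assert base_bdevs_list[2]["is_configured"] is False
# Slot 2 still carries BaseBdev3's UUID even while its name is null,
# which is what lets bdev_raid_add_base_bdev re-attach it below.
assert base_bdevs_list[2]["uuid"] == "be3a5c43-0452-460f-ba31-0f11a165ab87"
print("slot 2 free")
```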
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.604 06:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:09.604 06:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.604 06:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.604 06:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.604 06:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:09.604 06:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:09.604 06:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.604 06:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.604 [2024-11-26 06:23:53.496310] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:09.604 06:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.604 06:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:09.604 06:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:09.604 06:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:09.604 06:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:09.604 06:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:09.604 06:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:09.604 06:23:53 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:09.604 06:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:09.604 06:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:09.604 06:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:09.604 06:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.604 06:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:09.604 06:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.604 06:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.604 06:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.605 06:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:09.605 "name": "Existed_Raid", 00:14:09.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.605 "strip_size_kb": 64, 00:14:09.605 "state": "configuring", 00:14:09.605 "raid_level": "concat", 00:14:09.605 "superblock": false, 00:14:09.605 "num_base_bdevs": 4, 00:14:09.605 "num_base_bdevs_discovered": 3, 00:14:09.605 "num_base_bdevs_operational": 4, 00:14:09.605 "base_bdevs_list": [ 00:14:09.605 { 00:14:09.605 "name": "BaseBdev1", 00:14:09.605 "uuid": "afc32300-d23b-4ea7-a42b-252706c8307b", 00:14:09.605 "is_configured": true, 00:14:09.605 "data_offset": 0, 00:14:09.605 "data_size": 65536 00:14:09.605 }, 00:14:09.605 { 00:14:09.605 "name": null, 00:14:09.605 "uuid": "d032ec74-44e2-4b39-89e9-406a2ff3f89b", 00:14:09.605 "is_configured": false, 00:14:09.605 "data_offset": 0, 00:14:09.605 "data_size": 65536 00:14:09.605 }, 00:14:09.605 { 00:14:09.605 "name": "BaseBdev3", 00:14:09.605 "uuid": 
"be3a5c43-0452-460f-ba31-0f11a165ab87", 00:14:09.605 "is_configured": true, 00:14:09.605 "data_offset": 0, 00:14:09.605 "data_size": 65536 00:14:09.605 }, 00:14:09.605 { 00:14:09.605 "name": "BaseBdev4", 00:14:09.605 "uuid": "d0d8a179-9d5b-4eca-ace9-f3b8b9499b5e", 00:14:09.605 "is_configured": true, 00:14:09.605 "data_offset": 0, 00:14:09.605 "data_size": 65536 00:14:09.605 } 00:14:09.605 ] 00:14:09.605 }' 00:14:09.605 06:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:09.605 06:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.174 06:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:10.174 06:23:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.174 06:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.174 06:23:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.174 06:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.174 06:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:10.174 06:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:10.174 06:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.174 06:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.174 [2024-11-26 06:23:54.036218] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:10.174 06:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.174 06:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:14:10.174 06:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:10.174 06:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:10.174 06:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:10.174 06:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:10.174 06:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:10.174 06:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.174 06:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.174 06:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.174 06:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.174 06:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.174 06:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:10.174 06:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.174 06:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.174 06:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.174 06:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.174 "name": "Existed_Raid", 00:14:10.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.174 "strip_size_kb": 64, 00:14:10.174 "state": "configuring", 00:14:10.174 "raid_level": "concat", 00:14:10.174 "superblock": false, 00:14:10.174 "num_base_bdevs": 4, 00:14:10.174 
"num_base_bdevs_discovered": 2, 00:14:10.174 "num_base_bdevs_operational": 4, 00:14:10.174 "base_bdevs_list": [ 00:14:10.174 { 00:14:10.174 "name": null, 00:14:10.174 "uuid": "afc32300-d23b-4ea7-a42b-252706c8307b", 00:14:10.174 "is_configured": false, 00:14:10.174 "data_offset": 0, 00:14:10.174 "data_size": 65536 00:14:10.174 }, 00:14:10.174 { 00:14:10.174 "name": null, 00:14:10.174 "uuid": "d032ec74-44e2-4b39-89e9-406a2ff3f89b", 00:14:10.174 "is_configured": false, 00:14:10.174 "data_offset": 0, 00:14:10.174 "data_size": 65536 00:14:10.174 }, 00:14:10.174 { 00:14:10.174 "name": "BaseBdev3", 00:14:10.174 "uuid": "be3a5c43-0452-460f-ba31-0f11a165ab87", 00:14:10.174 "is_configured": true, 00:14:10.174 "data_offset": 0, 00:14:10.174 "data_size": 65536 00:14:10.174 }, 00:14:10.174 { 00:14:10.174 "name": "BaseBdev4", 00:14:10.174 "uuid": "d0d8a179-9d5b-4eca-ace9-f3b8b9499b5e", 00:14:10.174 "is_configured": true, 00:14:10.174 "data_offset": 0, 00:14:10.174 "data_size": 65536 00:14:10.174 } 00:14:10.174 ] 00:14:10.174 }' 00:14:10.174 06:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.174 06:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.742 06:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:10.742 06:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.742 06:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.742 06:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.742 06:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.742 06:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:10.742 06:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:10.742 06:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.742 06:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.742 [2024-11-26 06:23:54.672216] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:10.742 06:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.742 06:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:10.742 06:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:10.742 06:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:10.742 06:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:10.742 06:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:10.742 06:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:10.742 06:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.742 06:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.742 06:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.742 06:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.742 06:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.742 06:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:10.742 06:23:54 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.742 06:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.742 06:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.742 06:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.742 "name": "Existed_Raid", 00:14:10.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.742 "strip_size_kb": 64, 00:14:10.742 "state": "configuring", 00:14:10.742 "raid_level": "concat", 00:14:10.742 "superblock": false, 00:14:10.742 "num_base_bdevs": 4, 00:14:10.742 "num_base_bdevs_discovered": 3, 00:14:10.742 "num_base_bdevs_operational": 4, 00:14:10.742 "base_bdevs_list": [ 00:14:10.742 { 00:14:10.742 "name": null, 00:14:10.742 "uuid": "afc32300-d23b-4ea7-a42b-252706c8307b", 00:14:10.742 "is_configured": false, 00:14:10.742 "data_offset": 0, 00:14:10.742 "data_size": 65536 00:14:10.742 }, 00:14:10.742 { 00:14:10.742 "name": "BaseBdev2", 00:14:10.742 "uuid": "d032ec74-44e2-4b39-89e9-406a2ff3f89b", 00:14:10.742 "is_configured": true, 00:14:10.742 "data_offset": 0, 00:14:10.742 "data_size": 65536 00:14:10.742 }, 00:14:10.742 { 00:14:10.742 "name": "BaseBdev3", 00:14:10.742 "uuid": "be3a5c43-0452-460f-ba31-0f11a165ab87", 00:14:10.742 "is_configured": true, 00:14:10.742 "data_offset": 0, 00:14:10.742 "data_size": 65536 00:14:10.742 }, 00:14:10.743 { 00:14:10.743 "name": "BaseBdev4", 00:14:10.743 "uuid": "d0d8a179-9d5b-4eca-ace9-f3b8b9499b5e", 00:14:10.743 "is_configured": true, 00:14:10.743 "data_offset": 0, 00:14:10.743 "data_size": 65536 00:14:10.743 } 00:14:10.743 ] 00:14:10.743 }' 00:14:10.743 06:23:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.743 06:23:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.311 06:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:11.311 06:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.311 06:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:11.311 06:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.311 06:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.311 06:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:11.311 06:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.311 06:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.311 06:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.311 06:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:11.311 06:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.311 06:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u afc32300-d23b-4ea7-a42b-252706c8307b 00:14:11.311 06:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.311 06:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.311 [2024-11-26 06:23:55.275444] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:11.311 [2024-11-26 06:23:55.275623] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:11.311 [2024-11-26 06:23:55.275654] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:14:11.311 [2024-11-26 06:23:55.276045] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:14:11.311 [2024-11-26 06:23:55.276328] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:11.311 [2024-11-26 06:23:55.276385] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:11.311 [2024-11-26 06:23:55.276776] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:11.311 NewBaseBdev 00:14:11.311 06:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.311 06:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:11.311 06:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:11.311 06:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:11.311 06:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:11.311 06:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:11.311 06:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:11.311 06:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:11.311 06:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.311 06:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.311 06:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.311 06:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:11.311 06:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.311 06:23:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:11.311 [ 00:14:11.311 { 00:14:11.311 "name": "NewBaseBdev", 00:14:11.311 "aliases": [ 00:14:11.311 "afc32300-d23b-4ea7-a42b-252706c8307b" 00:14:11.311 ], 00:14:11.311 "product_name": "Malloc disk", 00:14:11.311 "block_size": 512, 00:14:11.311 "num_blocks": 65536, 00:14:11.311 "uuid": "afc32300-d23b-4ea7-a42b-252706c8307b", 00:14:11.311 "assigned_rate_limits": { 00:14:11.311 "rw_ios_per_sec": 0, 00:14:11.311 "rw_mbytes_per_sec": 0, 00:14:11.311 "r_mbytes_per_sec": 0, 00:14:11.311 "w_mbytes_per_sec": 0 00:14:11.311 }, 00:14:11.311 "claimed": true, 00:14:11.311 "claim_type": "exclusive_write", 00:14:11.311 "zoned": false, 00:14:11.311 "supported_io_types": { 00:14:11.311 "read": true, 00:14:11.311 "write": true, 00:14:11.311 "unmap": true, 00:14:11.311 "flush": true, 00:14:11.311 "reset": true, 00:14:11.311 "nvme_admin": false, 00:14:11.311 "nvme_io": false, 00:14:11.311 "nvme_io_md": false, 00:14:11.311 "write_zeroes": true, 00:14:11.311 "zcopy": true, 00:14:11.311 "get_zone_info": false, 00:14:11.311 "zone_management": false, 00:14:11.311 "zone_append": false, 00:14:11.311 "compare": false, 00:14:11.311 "compare_and_write": false, 00:14:11.311 "abort": true, 00:14:11.311 "seek_hole": false, 00:14:11.311 "seek_data": false, 00:14:11.311 "copy": true, 00:14:11.311 "nvme_iov_md": false 00:14:11.311 }, 00:14:11.311 "memory_domains": [ 00:14:11.311 { 00:14:11.311 "dma_device_id": "system", 00:14:11.311 "dma_device_type": 1 00:14:11.311 }, 00:14:11.311 { 00:14:11.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:11.311 "dma_device_type": 2 00:14:11.311 } 00:14:11.311 ], 00:14:11.311 "driver_specific": {} 00:14:11.311 } 00:14:11.311 ] 00:14:11.311 06:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.311 06:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:11.311 06:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 
-- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:14:11.311 06:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:11.311 06:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:11.311 06:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:11.312 06:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:11.312 06:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:11.312 06:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:11.312 06:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:11.312 06:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:11.312 06:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:11.312 06:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.312 06:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.312 06:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.312 06:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:11.312 06:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.312 06:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:11.312 "name": "Existed_Raid", 00:14:11.312 "uuid": "430de92c-1346-4bc3-b2ad-3ac6f1df823b", 00:14:11.312 "strip_size_kb": 64, 00:14:11.312 "state": "online", 00:14:11.312 "raid_level": "concat", 00:14:11.312 "superblock": false, 00:14:11.312 
"num_base_bdevs": 4, 00:14:11.312 "num_base_bdevs_discovered": 4, 00:14:11.312 "num_base_bdevs_operational": 4, 00:14:11.312 "base_bdevs_list": [ 00:14:11.312 { 00:14:11.312 "name": "NewBaseBdev", 00:14:11.312 "uuid": "afc32300-d23b-4ea7-a42b-252706c8307b", 00:14:11.312 "is_configured": true, 00:14:11.312 "data_offset": 0, 00:14:11.312 "data_size": 65536 00:14:11.312 }, 00:14:11.312 { 00:14:11.312 "name": "BaseBdev2", 00:14:11.312 "uuid": "d032ec74-44e2-4b39-89e9-406a2ff3f89b", 00:14:11.312 "is_configured": true, 00:14:11.312 "data_offset": 0, 00:14:11.312 "data_size": 65536 00:14:11.312 }, 00:14:11.312 { 00:14:11.312 "name": "BaseBdev3", 00:14:11.312 "uuid": "be3a5c43-0452-460f-ba31-0f11a165ab87", 00:14:11.312 "is_configured": true, 00:14:11.312 "data_offset": 0, 00:14:11.312 "data_size": 65536 00:14:11.312 }, 00:14:11.312 { 00:14:11.312 "name": "BaseBdev4", 00:14:11.312 "uuid": "d0d8a179-9d5b-4eca-ace9-f3b8b9499b5e", 00:14:11.312 "is_configured": true, 00:14:11.312 "data_offset": 0, 00:14:11.312 "data_size": 65536 00:14:11.312 } 00:14:11.312 ] 00:14:11.312 }' 00:14:11.312 06:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:11.312 06:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.880 06:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:11.880 06:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:11.880 06:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:11.880 06:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:11.880 06:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:11.880 06:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:11.880 06:23:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:11.880 06:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:11.880 06:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.880 06:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.880 [2024-11-26 06:23:55.755189] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:11.880 06:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.880 06:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:11.880 "name": "Existed_Raid", 00:14:11.880 "aliases": [ 00:14:11.880 "430de92c-1346-4bc3-b2ad-3ac6f1df823b" 00:14:11.880 ], 00:14:11.880 "product_name": "Raid Volume", 00:14:11.880 "block_size": 512, 00:14:11.880 "num_blocks": 262144, 00:14:11.880 "uuid": "430de92c-1346-4bc3-b2ad-3ac6f1df823b", 00:14:11.880 "assigned_rate_limits": { 00:14:11.880 "rw_ios_per_sec": 0, 00:14:11.880 "rw_mbytes_per_sec": 0, 00:14:11.880 "r_mbytes_per_sec": 0, 00:14:11.880 "w_mbytes_per_sec": 0 00:14:11.880 }, 00:14:11.880 "claimed": false, 00:14:11.880 "zoned": false, 00:14:11.880 "supported_io_types": { 00:14:11.880 "read": true, 00:14:11.880 "write": true, 00:14:11.880 "unmap": true, 00:14:11.880 "flush": true, 00:14:11.880 "reset": true, 00:14:11.880 "nvme_admin": false, 00:14:11.880 "nvme_io": false, 00:14:11.880 "nvme_io_md": false, 00:14:11.880 "write_zeroes": true, 00:14:11.880 "zcopy": false, 00:14:11.880 "get_zone_info": false, 00:14:11.881 "zone_management": false, 00:14:11.881 "zone_append": false, 00:14:11.881 "compare": false, 00:14:11.881 "compare_and_write": false, 00:14:11.881 "abort": false, 00:14:11.881 "seek_hole": false, 00:14:11.881 "seek_data": false, 00:14:11.881 "copy": false, 00:14:11.881 "nvme_iov_md": false 00:14:11.881 }, 
00:14:11.881 "memory_domains": [ 00:14:11.881 { 00:14:11.881 "dma_device_id": "system", 00:14:11.881 "dma_device_type": 1 00:14:11.881 }, 00:14:11.881 { 00:14:11.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:11.881 "dma_device_type": 2 00:14:11.881 }, 00:14:11.881 { 00:14:11.881 "dma_device_id": "system", 00:14:11.881 "dma_device_type": 1 00:14:11.881 }, 00:14:11.881 { 00:14:11.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:11.881 "dma_device_type": 2 00:14:11.881 }, 00:14:11.881 { 00:14:11.881 "dma_device_id": "system", 00:14:11.881 "dma_device_type": 1 00:14:11.881 }, 00:14:11.881 { 00:14:11.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:11.881 "dma_device_type": 2 00:14:11.881 }, 00:14:11.881 { 00:14:11.881 "dma_device_id": "system", 00:14:11.881 "dma_device_type": 1 00:14:11.881 }, 00:14:11.881 { 00:14:11.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:11.881 "dma_device_type": 2 00:14:11.881 } 00:14:11.881 ], 00:14:11.881 "driver_specific": { 00:14:11.881 "raid": { 00:14:11.881 "uuid": "430de92c-1346-4bc3-b2ad-3ac6f1df823b", 00:14:11.881 "strip_size_kb": 64, 00:14:11.881 "state": "online", 00:14:11.881 "raid_level": "concat", 00:14:11.881 "superblock": false, 00:14:11.881 "num_base_bdevs": 4, 00:14:11.881 "num_base_bdevs_discovered": 4, 00:14:11.881 "num_base_bdevs_operational": 4, 00:14:11.881 "base_bdevs_list": [ 00:14:11.881 { 00:14:11.881 "name": "NewBaseBdev", 00:14:11.881 "uuid": "afc32300-d23b-4ea7-a42b-252706c8307b", 00:14:11.881 "is_configured": true, 00:14:11.881 "data_offset": 0, 00:14:11.881 "data_size": 65536 00:14:11.881 }, 00:14:11.881 { 00:14:11.881 "name": "BaseBdev2", 00:14:11.881 "uuid": "d032ec74-44e2-4b39-89e9-406a2ff3f89b", 00:14:11.881 "is_configured": true, 00:14:11.881 "data_offset": 0, 00:14:11.881 "data_size": 65536 00:14:11.881 }, 00:14:11.881 { 00:14:11.881 "name": "BaseBdev3", 00:14:11.881 "uuid": "be3a5c43-0452-460f-ba31-0f11a165ab87", 00:14:11.881 "is_configured": true, 00:14:11.881 "data_offset": 0, 
00:14:11.881 "data_size": 65536 00:14:11.881 }, 00:14:11.881 { 00:14:11.881 "name": "BaseBdev4", 00:14:11.881 "uuid": "d0d8a179-9d5b-4eca-ace9-f3b8b9499b5e", 00:14:11.881 "is_configured": true, 00:14:11.881 "data_offset": 0, 00:14:11.881 "data_size": 65536 00:14:11.881 } 00:14:11.881 ] 00:14:11.881 } 00:14:11.881 } 00:14:11.881 }' 00:14:11.881 06:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:11.881 06:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:11.881 BaseBdev2 00:14:11.881 BaseBdev3 00:14:11.881 BaseBdev4' 00:14:11.881 06:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:11.881 06:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:11.881 06:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:11.881 06:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:11.881 06:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.881 06:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.881 06:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:11.881 06:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.881 06:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:11.881 06:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:11.881 06:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name 
in $base_bdev_names 00:14:11.881 06:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:11.881 06:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.881 06:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.881 06:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:11.881 06:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.881 06:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:11.881 06:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:11.881 06:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:11.881 06:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:11.881 06:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.881 06:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.881 06:23:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:11.881 06:23:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.141 06:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:12.141 06:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:12.141 06:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:12.141 06:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev4 00:14:12.141 06:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.141 06:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.141 06:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:12.141 06:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.141 06:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:12.141 06:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:12.141 06:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:12.141 06:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.141 06:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.141 [2024-11-26 06:23:56.082202] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:12.141 [2024-11-26 06:23:56.082333] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:12.141 [2024-11-26 06:23:56.082466] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:12.141 [2024-11-26 06:23:56.082595] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:12.141 [2024-11-26 06:23:56.082676] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:12.141 06:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.141 06:23:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71762 00:14:12.141 06:23:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 71762 ']' 00:14:12.141 06:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 71762 00:14:12.141 06:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:14:12.141 06:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:12.141 06:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71762 00:14:12.141 killing process with pid 71762 00:14:12.141 06:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:12.141 06:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:12.141 06:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71762' 00:14:12.141 06:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 71762 00:14:12.141 [2024-11-26 06:23:56.134272] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:12.141 06:23:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 71762 00:14:12.706 [2024-11-26 06:23:56.604818] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:14.087 06:23:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:14.087 00:14:14.087 real 0m12.538s 00:14:14.087 user 0m19.447s 00:14:14.087 sys 0m2.401s 00:14:14.087 06:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:14.087 06:23:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.087 ************************************ 00:14:14.087 END TEST raid_state_function_test 00:14:14.087 ************************************ 00:14:14.087 06:23:57 bdev_raid -- bdev/bdev_raid.sh@969 -- # 
run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:14:14.087 06:23:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:14.087 06:23:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:14.087 06:23:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:14.087 ************************************ 00:14:14.087 START TEST raid_state_function_test_sb 00:14:14.087 ************************************ 00:14:14.087 06:23:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:14:14.087 06:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:14:14.087 06:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:14.087 06:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:14.087 06:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:14.087 06:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:14.087 06:23:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:14.087 06:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:14.087 06:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:14.087 06:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:14.087 06:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:14.087 06:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:14.087 06:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:14.087 06:23:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:14.087 06:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:14.087 06:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:14.087 06:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:14.087 06:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:14.087 06:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:14.087 06:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:14.087 06:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:14.087 06:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:14.087 06:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:14.087 06:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:14.087 06:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:14.087 06:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:14:14.087 06:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:14.087 Process raid pid: 72443 00:14:14.087 06:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:14.087 06:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:14.087 06:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:14.087 06:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # 
raid_pid=72443 00:14:14.087 06:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72443' 00:14:14.087 06:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72443 00:14:14.087 06:23:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:14.087 06:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 72443 ']' 00:14:14.087 06:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:14.087 06:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:14.087 06:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:14.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:14.087 06:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:14.087 06:23:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.087 [2024-11-26 06:23:58.111470] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:14:14.087 [2024-11-26 06:23:58.112601] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:14.347 [2024-11-26 06:23:58.324214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:14.347 [2024-11-26 06:23:58.460298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:14.607 [2024-11-26 06:23:58.699378] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:14.607 [2024-11-26 06:23:58.699520] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:15.177 06:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:15.177 06:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:15.177 06:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:15.177 06:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.177 06:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.177 [2024-11-26 06:23:59.055014] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:15.177 [2024-11-26 06:23:59.055248] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:15.177 [2024-11-26 06:23:59.055292] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:15.177 [2024-11-26 06:23:59.055354] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:15.177 [2024-11-26 06:23:59.055408] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:14:15.177 [2024-11-26 06:23:59.055469] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:15.177 [2024-11-26 06:23:59.055503] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:15.177 [2024-11-26 06:23:59.055546] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:15.177 06:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.177 06:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:15.177 06:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:15.177 06:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:15.177 06:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:15.177 06:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:15.177 06:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:15.177 06:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:15.177 06:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:15.177 06:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:15.177 06:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:15.177 06:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.177 06:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.177 06:23:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.177 06:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:15.177 06:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.177 06:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:15.177 "name": "Existed_Raid", 00:14:15.177 "uuid": "e22102a9-6c8c-4149-a70a-44dfdde4c4c1", 00:14:15.177 "strip_size_kb": 64, 00:14:15.177 "state": "configuring", 00:14:15.177 "raid_level": "concat", 00:14:15.177 "superblock": true, 00:14:15.177 "num_base_bdevs": 4, 00:14:15.177 "num_base_bdevs_discovered": 0, 00:14:15.177 "num_base_bdevs_operational": 4, 00:14:15.177 "base_bdevs_list": [ 00:14:15.177 { 00:14:15.177 "name": "BaseBdev1", 00:14:15.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.177 "is_configured": false, 00:14:15.177 "data_offset": 0, 00:14:15.177 "data_size": 0 00:14:15.177 }, 00:14:15.177 { 00:14:15.177 "name": "BaseBdev2", 00:14:15.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.177 "is_configured": false, 00:14:15.177 "data_offset": 0, 00:14:15.177 "data_size": 0 00:14:15.177 }, 00:14:15.177 { 00:14:15.177 "name": "BaseBdev3", 00:14:15.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.177 "is_configured": false, 00:14:15.177 "data_offset": 0, 00:14:15.177 "data_size": 0 00:14:15.177 }, 00:14:15.177 { 00:14:15.177 "name": "BaseBdev4", 00:14:15.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.177 "is_configured": false, 00:14:15.177 "data_offset": 0, 00:14:15.177 "data_size": 0 00:14:15.177 } 00:14:15.177 ] 00:14:15.177 }' 00:14:15.177 06:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:15.177 06:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.437 06:23:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:15.437 06:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.437 06:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.437 [2024-11-26 06:23:59.506236] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:15.437 [2024-11-26 06:23:59.506393] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:15.437 06:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.437 06:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:15.437 06:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.437 06:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.437 [2024-11-26 06:23:59.518227] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:15.437 [2024-11-26 06:23:59.518387] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:15.437 [2024-11-26 06:23:59.518451] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:15.437 [2024-11-26 06:23:59.518498] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:15.437 [2024-11-26 06:23:59.518538] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:15.437 [2024-11-26 06:23:59.518587] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:15.437 [2024-11-26 06:23:59.518626] bdev.c:8272:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:14:15.437 [2024-11-26 06:23:59.518671] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:15.437 06:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.437 06:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:15.437 06:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.437 06:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.698 [2024-11-26 06:23:59.572598] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:15.698 BaseBdev1 00:14:15.698 06:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.698 06:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:15.698 06:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:15.698 06:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:15.698 06:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:15.698 06:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:15.698 06:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:15.698 06:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:15.698 06:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.698 06:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.698 06:23:59 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.698 06:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:15.698 06:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.698 06:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.698 [ 00:14:15.698 { 00:14:15.698 "name": "BaseBdev1", 00:14:15.698 "aliases": [ 00:14:15.698 "0e44d96f-9eba-4218-93cf-b2d97cdb0b31" 00:14:15.698 ], 00:14:15.698 "product_name": "Malloc disk", 00:14:15.698 "block_size": 512, 00:14:15.698 "num_blocks": 65536, 00:14:15.698 "uuid": "0e44d96f-9eba-4218-93cf-b2d97cdb0b31", 00:14:15.698 "assigned_rate_limits": { 00:14:15.698 "rw_ios_per_sec": 0, 00:14:15.698 "rw_mbytes_per_sec": 0, 00:14:15.698 "r_mbytes_per_sec": 0, 00:14:15.698 "w_mbytes_per_sec": 0 00:14:15.698 }, 00:14:15.698 "claimed": true, 00:14:15.698 "claim_type": "exclusive_write", 00:14:15.698 "zoned": false, 00:14:15.698 "supported_io_types": { 00:14:15.698 "read": true, 00:14:15.698 "write": true, 00:14:15.698 "unmap": true, 00:14:15.698 "flush": true, 00:14:15.698 "reset": true, 00:14:15.698 "nvme_admin": false, 00:14:15.698 "nvme_io": false, 00:14:15.698 "nvme_io_md": false, 00:14:15.698 "write_zeroes": true, 00:14:15.698 "zcopy": true, 00:14:15.698 "get_zone_info": false, 00:14:15.698 "zone_management": false, 00:14:15.698 "zone_append": false, 00:14:15.698 "compare": false, 00:14:15.698 "compare_and_write": false, 00:14:15.698 "abort": true, 00:14:15.698 "seek_hole": false, 00:14:15.698 "seek_data": false, 00:14:15.698 "copy": true, 00:14:15.698 "nvme_iov_md": false 00:14:15.698 }, 00:14:15.698 "memory_domains": [ 00:14:15.698 { 00:14:15.698 "dma_device_id": "system", 00:14:15.698 "dma_device_type": 1 00:14:15.698 }, 00:14:15.698 { 00:14:15.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:15.698 "dma_device_type": 2 00:14:15.698 } 
00:14:15.698 ], 00:14:15.698 "driver_specific": {} 00:14:15.698 } 00:14:15.698 ] 00:14:15.698 06:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.698 06:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:15.698 06:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:15.698 06:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:15.698 06:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:15.698 06:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:15.698 06:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:15.698 06:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:15.698 06:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:15.698 06:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:15.698 06:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:15.698 06:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:15.698 06:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.698 06:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:15.698 06:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.698 06:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.698 06:23:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.698 06:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:15.698 "name": "Existed_Raid", 00:14:15.698 "uuid": "36e1f0dd-2417-4bbd-b418-1b56e168fea8", 00:14:15.698 "strip_size_kb": 64, 00:14:15.698 "state": "configuring", 00:14:15.698 "raid_level": "concat", 00:14:15.698 "superblock": true, 00:14:15.698 "num_base_bdevs": 4, 00:14:15.698 "num_base_bdevs_discovered": 1, 00:14:15.698 "num_base_bdevs_operational": 4, 00:14:15.698 "base_bdevs_list": [ 00:14:15.698 { 00:14:15.698 "name": "BaseBdev1", 00:14:15.698 "uuid": "0e44d96f-9eba-4218-93cf-b2d97cdb0b31", 00:14:15.698 "is_configured": true, 00:14:15.698 "data_offset": 2048, 00:14:15.698 "data_size": 63488 00:14:15.698 }, 00:14:15.698 { 00:14:15.698 "name": "BaseBdev2", 00:14:15.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.698 "is_configured": false, 00:14:15.698 "data_offset": 0, 00:14:15.698 "data_size": 0 00:14:15.698 }, 00:14:15.698 { 00:14:15.698 "name": "BaseBdev3", 00:14:15.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.698 "is_configured": false, 00:14:15.698 "data_offset": 0, 00:14:15.698 "data_size": 0 00:14:15.698 }, 00:14:15.698 { 00:14:15.698 "name": "BaseBdev4", 00:14:15.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.698 "is_configured": false, 00:14:15.698 "data_offset": 0, 00:14:15.699 "data_size": 0 00:14:15.699 } 00:14:15.699 ] 00:14:15.699 }' 00:14:15.699 06:23:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:15.699 06:23:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.958 06:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:15.958 06:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.958 06:24:00 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.958 [2024-11-26 06:24:00.083942] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:15.958 [2024-11-26 06:24:00.084162] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:15.958 06:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.218 06:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:16.218 06:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.218 06:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.218 [2024-11-26 06:24:00.096078] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:16.218 [2024-11-26 06:24:00.098514] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:16.218 [2024-11-26 06:24:00.098625] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:16.218 [2024-11-26 06:24:00.098674] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:16.218 [2024-11-26 06:24:00.098721] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:16.218 [2024-11-26 06:24:00.098751] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:16.218 [2024-11-26 06:24:00.098785] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:16.218 06:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.218 06:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:14:16.218 06:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:16.218 06:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:16.218 06:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:16.218 06:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:16.218 06:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:16.218 06:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:16.218 06:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:16.218 06:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:16.218 06:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:16.218 06:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:16.218 06:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:16.218 06:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.218 06:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:16.218 06:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.218 06:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.218 06:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.218 06:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:14:16.218 "name": "Existed_Raid", 00:14:16.218 "uuid": "fb0ce534-7d25-443d-9e48-0782c2b44e8a", 00:14:16.218 "strip_size_kb": 64, 00:14:16.218 "state": "configuring", 00:14:16.218 "raid_level": "concat", 00:14:16.218 "superblock": true, 00:14:16.218 "num_base_bdevs": 4, 00:14:16.218 "num_base_bdevs_discovered": 1, 00:14:16.218 "num_base_bdevs_operational": 4, 00:14:16.218 "base_bdevs_list": [ 00:14:16.218 { 00:14:16.218 "name": "BaseBdev1", 00:14:16.218 "uuid": "0e44d96f-9eba-4218-93cf-b2d97cdb0b31", 00:14:16.218 "is_configured": true, 00:14:16.218 "data_offset": 2048, 00:14:16.218 "data_size": 63488 00:14:16.218 }, 00:14:16.218 { 00:14:16.218 "name": "BaseBdev2", 00:14:16.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.218 "is_configured": false, 00:14:16.218 "data_offset": 0, 00:14:16.218 "data_size": 0 00:14:16.218 }, 00:14:16.218 { 00:14:16.218 "name": "BaseBdev3", 00:14:16.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.218 "is_configured": false, 00:14:16.218 "data_offset": 0, 00:14:16.218 "data_size": 0 00:14:16.218 }, 00:14:16.218 { 00:14:16.218 "name": "BaseBdev4", 00:14:16.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.218 "is_configured": false, 00:14:16.218 "data_offset": 0, 00:14:16.218 "data_size": 0 00:14:16.218 } 00:14:16.218 ] 00:14:16.218 }' 00:14:16.218 06:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:16.218 06:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.478 06:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:16.478 06:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.478 06:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.739 [2024-11-26 06:24:00.614791] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:14:16.739 BaseBdev2 00:14:16.739 06:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.739 06:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:16.739 06:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:16.739 06:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:16.739 06:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:16.739 06:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:16.739 06:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:16.739 06:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:16.739 06:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.739 06:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.739 06:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.739 06:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:16.739 06:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.739 06:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.739 [ 00:14:16.739 { 00:14:16.739 "name": "BaseBdev2", 00:14:16.739 "aliases": [ 00:14:16.739 "f0377b0f-49c7-4c2e-a41e-a8b96afd210a" 00:14:16.739 ], 00:14:16.739 "product_name": "Malloc disk", 00:14:16.739 "block_size": 512, 00:14:16.739 "num_blocks": 65536, 00:14:16.739 "uuid": "f0377b0f-49c7-4c2e-a41e-a8b96afd210a", 
00:14:16.739 "assigned_rate_limits": { 00:14:16.739 "rw_ios_per_sec": 0, 00:14:16.739 "rw_mbytes_per_sec": 0, 00:14:16.739 "r_mbytes_per_sec": 0, 00:14:16.739 "w_mbytes_per_sec": 0 00:14:16.739 }, 00:14:16.739 "claimed": true, 00:14:16.739 "claim_type": "exclusive_write", 00:14:16.739 "zoned": false, 00:14:16.739 "supported_io_types": { 00:14:16.739 "read": true, 00:14:16.739 "write": true, 00:14:16.739 "unmap": true, 00:14:16.739 "flush": true, 00:14:16.739 "reset": true, 00:14:16.739 "nvme_admin": false, 00:14:16.739 "nvme_io": false, 00:14:16.739 "nvme_io_md": false, 00:14:16.739 "write_zeroes": true, 00:14:16.739 "zcopy": true, 00:14:16.739 "get_zone_info": false, 00:14:16.739 "zone_management": false, 00:14:16.739 "zone_append": false, 00:14:16.739 "compare": false, 00:14:16.739 "compare_and_write": false, 00:14:16.739 "abort": true, 00:14:16.739 "seek_hole": false, 00:14:16.739 "seek_data": false, 00:14:16.739 "copy": true, 00:14:16.739 "nvme_iov_md": false 00:14:16.739 }, 00:14:16.739 "memory_domains": [ 00:14:16.739 { 00:14:16.739 "dma_device_id": "system", 00:14:16.739 "dma_device_type": 1 00:14:16.739 }, 00:14:16.739 { 00:14:16.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:16.739 "dma_device_type": 2 00:14:16.739 } 00:14:16.739 ], 00:14:16.739 "driver_specific": {} 00:14:16.739 } 00:14:16.739 ] 00:14:16.739 06:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.739 06:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:16.739 06:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:16.739 06:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:16.739 06:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:16.739 06:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:14:16.739 06:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:16.739 06:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:16.739 06:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:16.739 06:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:16.739 06:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:16.739 06:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:16.739 06:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:16.739 06:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:16.739 06:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.739 06:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.739 06:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:16.739 06:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.739 06:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.739 06:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:16.739 "name": "Existed_Raid", 00:14:16.739 "uuid": "fb0ce534-7d25-443d-9e48-0782c2b44e8a", 00:14:16.739 "strip_size_kb": 64, 00:14:16.739 "state": "configuring", 00:14:16.739 "raid_level": "concat", 00:14:16.739 "superblock": true, 00:14:16.739 "num_base_bdevs": 4, 00:14:16.739 "num_base_bdevs_discovered": 2, 00:14:16.739 
"num_base_bdevs_operational": 4, 00:14:16.739 "base_bdevs_list": [ 00:14:16.739 { 00:14:16.739 "name": "BaseBdev1", 00:14:16.739 "uuid": "0e44d96f-9eba-4218-93cf-b2d97cdb0b31", 00:14:16.739 "is_configured": true, 00:14:16.739 "data_offset": 2048, 00:14:16.739 "data_size": 63488 00:14:16.739 }, 00:14:16.739 { 00:14:16.739 "name": "BaseBdev2", 00:14:16.739 "uuid": "f0377b0f-49c7-4c2e-a41e-a8b96afd210a", 00:14:16.739 "is_configured": true, 00:14:16.739 "data_offset": 2048, 00:14:16.739 "data_size": 63488 00:14:16.739 }, 00:14:16.739 { 00:14:16.739 "name": "BaseBdev3", 00:14:16.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.739 "is_configured": false, 00:14:16.739 "data_offset": 0, 00:14:16.739 "data_size": 0 00:14:16.739 }, 00:14:16.739 { 00:14:16.739 "name": "BaseBdev4", 00:14:16.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.739 "is_configured": false, 00:14:16.739 "data_offset": 0, 00:14:16.739 "data_size": 0 00:14:16.739 } 00:14:16.739 ] 00:14:16.739 }' 00:14:16.739 06:24:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:16.739 06:24:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.998 06:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:16.998 06:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.998 06:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.257 [2024-11-26 06:24:01.157850] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:17.257 BaseBdev3 00:14:17.257 06:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.257 06:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:17.257 06:24:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:17.257 06:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:17.257 06:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:17.257 06:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:17.257 06:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:17.257 06:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:17.257 06:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.257 06:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.257 06:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.257 06:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:17.257 06:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.257 06:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.257 [ 00:14:17.257 { 00:14:17.257 "name": "BaseBdev3", 00:14:17.257 "aliases": [ 00:14:17.257 "55cbfda2-22be-47a8-80b1-200ee6ccd542" 00:14:17.257 ], 00:14:17.257 "product_name": "Malloc disk", 00:14:17.257 "block_size": 512, 00:14:17.257 "num_blocks": 65536, 00:14:17.257 "uuid": "55cbfda2-22be-47a8-80b1-200ee6ccd542", 00:14:17.257 "assigned_rate_limits": { 00:14:17.257 "rw_ios_per_sec": 0, 00:14:17.257 "rw_mbytes_per_sec": 0, 00:14:17.257 "r_mbytes_per_sec": 0, 00:14:17.257 "w_mbytes_per_sec": 0 00:14:17.257 }, 00:14:17.257 "claimed": true, 00:14:17.257 "claim_type": "exclusive_write", 00:14:17.257 "zoned": false, 00:14:17.257 "supported_io_types": { 
00:14:17.257 "read": true, 00:14:17.257 "write": true, 00:14:17.257 "unmap": true, 00:14:17.257 "flush": true, 00:14:17.257 "reset": true, 00:14:17.257 "nvme_admin": false, 00:14:17.257 "nvme_io": false, 00:14:17.257 "nvme_io_md": false, 00:14:17.257 "write_zeroes": true, 00:14:17.257 "zcopy": true, 00:14:17.257 "get_zone_info": false, 00:14:17.257 "zone_management": false, 00:14:17.257 "zone_append": false, 00:14:17.257 "compare": false, 00:14:17.257 "compare_and_write": false, 00:14:17.257 "abort": true, 00:14:17.257 "seek_hole": false, 00:14:17.257 "seek_data": false, 00:14:17.257 "copy": true, 00:14:17.257 "nvme_iov_md": false 00:14:17.257 }, 00:14:17.257 "memory_domains": [ 00:14:17.257 { 00:14:17.257 "dma_device_id": "system", 00:14:17.257 "dma_device_type": 1 00:14:17.257 }, 00:14:17.257 { 00:14:17.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:17.257 "dma_device_type": 2 00:14:17.257 } 00:14:17.257 ], 00:14:17.257 "driver_specific": {} 00:14:17.257 } 00:14:17.257 ] 00:14:17.257 06:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.257 06:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:17.257 06:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:17.257 06:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:17.257 06:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:17.257 06:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:17.257 06:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:17.257 06:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:17.257 06:24:01 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:17.257 06:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:17.257 06:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:17.257 06:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:17.257 06:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:17.257 06:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:17.257 06:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.257 06:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:17.257 06:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.257 06:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.257 06:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.257 06:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:17.257 "name": "Existed_Raid", 00:14:17.257 "uuid": "fb0ce534-7d25-443d-9e48-0782c2b44e8a", 00:14:17.257 "strip_size_kb": 64, 00:14:17.257 "state": "configuring", 00:14:17.257 "raid_level": "concat", 00:14:17.257 "superblock": true, 00:14:17.257 "num_base_bdevs": 4, 00:14:17.257 "num_base_bdevs_discovered": 3, 00:14:17.257 "num_base_bdevs_operational": 4, 00:14:17.257 "base_bdevs_list": [ 00:14:17.257 { 00:14:17.257 "name": "BaseBdev1", 00:14:17.257 "uuid": "0e44d96f-9eba-4218-93cf-b2d97cdb0b31", 00:14:17.257 "is_configured": true, 00:14:17.257 "data_offset": 2048, 00:14:17.257 "data_size": 63488 00:14:17.257 }, 00:14:17.257 { 00:14:17.257 "name": "BaseBdev2", 00:14:17.257 
"uuid": "f0377b0f-49c7-4c2e-a41e-a8b96afd210a", 00:14:17.257 "is_configured": true, 00:14:17.257 "data_offset": 2048, 00:14:17.257 "data_size": 63488 00:14:17.257 }, 00:14:17.257 { 00:14:17.257 "name": "BaseBdev3", 00:14:17.257 "uuid": "55cbfda2-22be-47a8-80b1-200ee6ccd542", 00:14:17.257 "is_configured": true, 00:14:17.257 "data_offset": 2048, 00:14:17.257 "data_size": 63488 00:14:17.257 }, 00:14:17.257 { 00:14:17.257 "name": "BaseBdev4", 00:14:17.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.257 "is_configured": false, 00:14:17.257 "data_offset": 0, 00:14:17.257 "data_size": 0 00:14:17.257 } 00:14:17.257 ] 00:14:17.257 }' 00:14:17.257 06:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:17.258 06:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.828 06:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:17.828 06:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.828 06:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.828 [2024-11-26 06:24:01.715918] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:17.828 [2024-11-26 06:24:01.716412] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:17.828 [2024-11-26 06:24:01.716475] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:17.828 BaseBdev4 00:14:17.828 [2024-11-26 06:24:01.716827] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:17.828 [2024-11-26 06:24:01.717011] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:17.828 [2024-11-26 06:24:01.717097] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:14:17.828 [2024-11-26 06:24:01.717356] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:17.828 06:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.828 06:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:17.828 06:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:17.828 06:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:17.828 06:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:17.828 06:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:17.828 06:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:17.828 06:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:17.828 06:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.828 06:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.828 06:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.828 06:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:17.828 06:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.828 06:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.828 [ 00:14:17.828 { 00:14:17.828 "name": "BaseBdev4", 00:14:17.828 "aliases": [ 00:14:17.828 "0e75c0d4-0c24-4e09-baef-c32d9561a82e" 00:14:17.828 ], 00:14:17.828 "product_name": "Malloc disk", 00:14:17.828 "block_size": 512, 00:14:17.828 
"num_blocks": 65536, 00:14:17.828 "uuid": "0e75c0d4-0c24-4e09-baef-c32d9561a82e", 00:14:17.828 "assigned_rate_limits": { 00:14:17.828 "rw_ios_per_sec": 0, 00:14:17.828 "rw_mbytes_per_sec": 0, 00:14:17.828 "r_mbytes_per_sec": 0, 00:14:17.828 "w_mbytes_per_sec": 0 00:14:17.828 }, 00:14:17.828 "claimed": true, 00:14:17.828 "claim_type": "exclusive_write", 00:14:17.828 "zoned": false, 00:14:17.828 "supported_io_types": { 00:14:17.828 "read": true, 00:14:17.828 "write": true, 00:14:17.828 "unmap": true, 00:14:17.828 "flush": true, 00:14:17.828 "reset": true, 00:14:17.828 "nvme_admin": false, 00:14:17.828 "nvme_io": false, 00:14:17.828 "nvme_io_md": false, 00:14:17.828 "write_zeroes": true, 00:14:17.828 "zcopy": true, 00:14:17.828 "get_zone_info": false, 00:14:17.828 "zone_management": false, 00:14:17.828 "zone_append": false, 00:14:17.828 "compare": false, 00:14:17.828 "compare_and_write": false, 00:14:17.828 "abort": true, 00:14:17.828 "seek_hole": false, 00:14:17.828 "seek_data": false, 00:14:17.828 "copy": true, 00:14:17.828 "nvme_iov_md": false 00:14:17.828 }, 00:14:17.828 "memory_domains": [ 00:14:17.828 { 00:14:17.828 "dma_device_id": "system", 00:14:17.828 "dma_device_type": 1 00:14:17.828 }, 00:14:17.828 { 00:14:17.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:17.828 "dma_device_type": 2 00:14:17.828 } 00:14:17.828 ], 00:14:17.828 "driver_specific": {} 00:14:17.828 } 00:14:17.828 ] 00:14:17.828 06:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.828 06:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:17.828 06:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:17.828 06:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:17.828 06:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:14:17.828 06:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:17.828 06:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:17.828 06:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:17.828 06:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:17.828 06:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:17.828 06:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:17.828 06:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:17.828 06:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:17.828 06:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:17.828 06:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.828 06:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:17.828 06:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.828 06:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.828 06:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.828 06:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:17.828 "name": "Existed_Raid", 00:14:17.828 "uuid": "fb0ce534-7d25-443d-9e48-0782c2b44e8a", 00:14:17.828 "strip_size_kb": 64, 00:14:17.828 "state": "online", 00:14:17.828 "raid_level": "concat", 00:14:17.828 "superblock": true, 00:14:17.828 "num_base_bdevs": 4, 
00:14:17.828 "num_base_bdevs_discovered": 4, 00:14:17.828 "num_base_bdevs_operational": 4, 00:14:17.828 "base_bdevs_list": [ 00:14:17.828 { 00:14:17.828 "name": "BaseBdev1", 00:14:17.828 "uuid": "0e44d96f-9eba-4218-93cf-b2d97cdb0b31", 00:14:17.828 "is_configured": true, 00:14:17.828 "data_offset": 2048, 00:14:17.828 "data_size": 63488 00:14:17.828 }, 00:14:17.828 { 00:14:17.828 "name": "BaseBdev2", 00:14:17.828 "uuid": "f0377b0f-49c7-4c2e-a41e-a8b96afd210a", 00:14:17.828 "is_configured": true, 00:14:17.828 "data_offset": 2048, 00:14:17.828 "data_size": 63488 00:14:17.828 }, 00:14:17.828 { 00:14:17.828 "name": "BaseBdev3", 00:14:17.828 "uuid": "55cbfda2-22be-47a8-80b1-200ee6ccd542", 00:14:17.828 "is_configured": true, 00:14:17.828 "data_offset": 2048, 00:14:17.828 "data_size": 63488 00:14:17.828 }, 00:14:17.828 { 00:14:17.828 "name": "BaseBdev4", 00:14:17.828 "uuid": "0e75c0d4-0c24-4e09-baef-c32d9561a82e", 00:14:17.828 "is_configured": true, 00:14:17.828 "data_offset": 2048, 00:14:17.828 "data_size": 63488 00:14:17.828 } 00:14:17.828 ] 00:14:17.828 }' 00:14:17.828 06:24:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:17.829 06:24:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.089 06:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:18.089 06:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:18.089 06:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:18.089 06:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:18.089 06:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:18.089 06:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:18.089 
06:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:18.089 06:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.089 06:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.089 06:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:18.089 [2024-11-26 06:24:02.207577] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:18.350 06:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.350 06:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:18.350 "name": "Existed_Raid", 00:14:18.350 "aliases": [ 00:14:18.350 "fb0ce534-7d25-443d-9e48-0782c2b44e8a" 00:14:18.350 ], 00:14:18.350 "product_name": "Raid Volume", 00:14:18.350 "block_size": 512, 00:14:18.350 "num_blocks": 253952, 00:14:18.350 "uuid": "fb0ce534-7d25-443d-9e48-0782c2b44e8a", 00:14:18.350 "assigned_rate_limits": { 00:14:18.350 "rw_ios_per_sec": 0, 00:14:18.350 "rw_mbytes_per_sec": 0, 00:14:18.350 "r_mbytes_per_sec": 0, 00:14:18.350 "w_mbytes_per_sec": 0 00:14:18.350 }, 00:14:18.350 "claimed": false, 00:14:18.350 "zoned": false, 00:14:18.350 "supported_io_types": { 00:14:18.350 "read": true, 00:14:18.350 "write": true, 00:14:18.350 "unmap": true, 00:14:18.350 "flush": true, 00:14:18.350 "reset": true, 00:14:18.350 "nvme_admin": false, 00:14:18.350 "nvme_io": false, 00:14:18.350 "nvme_io_md": false, 00:14:18.350 "write_zeroes": true, 00:14:18.350 "zcopy": false, 00:14:18.350 "get_zone_info": false, 00:14:18.350 "zone_management": false, 00:14:18.350 "zone_append": false, 00:14:18.350 "compare": false, 00:14:18.350 "compare_and_write": false, 00:14:18.350 "abort": false, 00:14:18.350 "seek_hole": false, 00:14:18.350 "seek_data": false, 00:14:18.350 "copy": false, 00:14:18.350 
"nvme_iov_md": false 00:14:18.350 }, 00:14:18.350 "memory_domains": [ 00:14:18.350 { 00:14:18.350 "dma_device_id": "system", 00:14:18.350 "dma_device_type": 1 00:14:18.350 }, 00:14:18.350 { 00:14:18.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:18.350 "dma_device_type": 2 00:14:18.350 }, 00:14:18.350 { 00:14:18.350 "dma_device_id": "system", 00:14:18.350 "dma_device_type": 1 00:14:18.350 }, 00:14:18.350 { 00:14:18.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:18.350 "dma_device_type": 2 00:14:18.350 }, 00:14:18.350 { 00:14:18.350 "dma_device_id": "system", 00:14:18.350 "dma_device_type": 1 00:14:18.350 }, 00:14:18.350 { 00:14:18.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:18.350 "dma_device_type": 2 00:14:18.350 }, 00:14:18.350 { 00:14:18.350 "dma_device_id": "system", 00:14:18.350 "dma_device_type": 1 00:14:18.350 }, 00:14:18.350 { 00:14:18.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:18.350 "dma_device_type": 2 00:14:18.350 } 00:14:18.350 ], 00:14:18.350 "driver_specific": { 00:14:18.350 "raid": { 00:14:18.350 "uuid": "fb0ce534-7d25-443d-9e48-0782c2b44e8a", 00:14:18.350 "strip_size_kb": 64, 00:14:18.350 "state": "online", 00:14:18.350 "raid_level": "concat", 00:14:18.350 "superblock": true, 00:14:18.350 "num_base_bdevs": 4, 00:14:18.350 "num_base_bdevs_discovered": 4, 00:14:18.350 "num_base_bdevs_operational": 4, 00:14:18.350 "base_bdevs_list": [ 00:14:18.350 { 00:14:18.350 "name": "BaseBdev1", 00:14:18.350 "uuid": "0e44d96f-9eba-4218-93cf-b2d97cdb0b31", 00:14:18.350 "is_configured": true, 00:14:18.350 "data_offset": 2048, 00:14:18.350 "data_size": 63488 00:14:18.350 }, 00:14:18.350 { 00:14:18.350 "name": "BaseBdev2", 00:14:18.350 "uuid": "f0377b0f-49c7-4c2e-a41e-a8b96afd210a", 00:14:18.350 "is_configured": true, 00:14:18.350 "data_offset": 2048, 00:14:18.350 "data_size": 63488 00:14:18.350 }, 00:14:18.350 { 00:14:18.350 "name": "BaseBdev3", 00:14:18.350 "uuid": "55cbfda2-22be-47a8-80b1-200ee6ccd542", 00:14:18.350 "is_configured": true, 
00:14:18.350 "data_offset": 2048, 00:14:18.350 "data_size": 63488 00:14:18.350 }, 00:14:18.350 { 00:14:18.350 "name": "BaseBdev4", 00:14:18.350 "uuid": "0e75c0d4-0c24-4e09-baef-c32d9561a82e", 00:14:18.350 "is_configured": true, 00:14:18.350 "data_offset": 2048, 00:14:18.350 "data_size": 63488 00:14:18.350 } 00:14:18.350 ] 00:14:18.350 } 00:14:18.350 } 00:14:18.350 }' 00:14:18.350 06:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:18.350 06:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:18.350 BaseBdev2 00:14:18.350 BaseBdev3 00:14:18.350 BaseBdev4' 00:14:18.350 06:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:18.350 06:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:18.350 06:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:18.350 06:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:18.350 06:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:18.350 06:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.350 06:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.350 06:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.350 06:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:18.350 06:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:18.350 06:24:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:18.350 06:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:18.350 06:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:18.350 06:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.350 06:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.350 06:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.350 06:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:18.350 06:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:18.350 06:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:18.350 06:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:18.350 06:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:18.350 06:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.350 06:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.611 06:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.611 06:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:18.611 06:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:18.611 06:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:14:18.611 06:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:18.611 06:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.611 06:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:18.611 06:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.611 06:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.611 06:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:18.611 06:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:18.611 06:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:18.611 06:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.611 06:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.611 [2024-11-26 06:24:02.570659] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:18.611 [2024-11-26 06:24:02.570811] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:18.611 [2024-11-26 06:24:02.570901] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:18.611 06:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.611 06:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:18.611 06:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:14:18.611 06:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:14:18.611 06:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:14:18.611 06:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:14:18.611 06:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:14:18.611 06:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:18.611 06:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:14:18.611 06:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:18.611 06:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:18.611 06:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:18.611 06:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:18.611 06:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:18.611 06:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:18.611 06:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:18.611 06:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.611 06:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:18.611 06:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.611 06:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.611 06:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:18.871 06:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:18.871 "name": "Existed_Raid", 00:14:18.871 "uuid": "fb0ce534-7d25-443d-9e48-0782c2b44e8a", 00:14:18.871 "strip_size_kb": 64, 00:14:18.871 "state": "offline", 00:14:18.871 "raid_level": "concat", 00:14:18.871 "superblock": true, 00:14:18.871 "num_base_bdevs": 4, 00:14:18.871 "num_base_bdevs_discovered": 3, 00:14:18.871 "num_base_bdevs_operational": 3, 00:14:18.871 "base_bdevs_list": [ 00:14:18.871 { 00:14:18.871 "name": null, 00:14:18.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.871 "is_configured": false, 00:14:18.871 "data_offset": 0, 00:14:18.871 "data_size": 63488 00:14:18.871 }, 00:14:18.871 { 00:14:18.871 "name": "BaseBdev2", 00:14:18.871 "uuid": "f0377b0f-49c7-4c2e-a41e-a8b96afd210a", 00:14:18.871 "is_configured": true, 00:14:18.871 "data_offset": 2048, 00:14:18.871 "data_size": 63488 00:14:18.871 }, 00:14:18.871 { 00:14:18.871 "name": "BaseBdev3", 00:14:18.871 "uuid": "55cbfda2-22be-47a8-80b1-200ee6ccd542", 00:14:18.871 "is_configured": true, 00:14:18.871 "data_offset": 2048, 00:14:18.871 "data_size": 63488 00:14:18.871 }, 00:14:18.871 { 00:14:18.871 "name": "BaseBdev4", 00:14:18.871 "uuid": "0e75c0d4-0c24-4e09-baef-c32d9561a82e", 00:14:18.871 "is_configured": true, 00:14:18.871 "data_offset": 2048, 00:14:18.871 "data_size": 63488 00:14:18.871 } 00:14:18.871 ] 00:14:18.871 }' 00:14:18.871 06:24:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:18.871 06:24:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.130 06:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:19.130 06:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:19.130 06:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.130 
06:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:19.130 06:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.130 06:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.130 06:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.130 06:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:19.130 06:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:19.130 06:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:19.130 06:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.130 06:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.130 [2024-11-26 06:24:03.240343] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:19.389 06:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.389 06:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:19.389 06:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:19.389 06:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.389 06:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:19.389 06:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.389 06:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.389 06:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:14:19.389 06:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:19.389 06:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:19.389 06:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:19.389 06:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.389 06:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.389 [2024-11-26 06:24:03.401071] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:19.389 06:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.389 06:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:19.389 06:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:19.389 06:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.389 06:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:19.389 06:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.389 06:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.648 06:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.648 06:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:19.648 06:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:19.648 06:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:19.648 06:24:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.648 06:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.648 [2024-11-26 06:24:03.563315] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:19.648 [2024-11-26 06:24:03.563433] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:19.648 06:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.648 06:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:19.648 06:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:19.648 06:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:19.648 06:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.648 06:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.648 06:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.648 06:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.648 06:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:19.648 06:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:19.648 06:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:14:19.648 06:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:19.648 06:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:19.648 06:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:14:19.648 06:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.648 06:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.648 BaseBdev2 00:14:19.648 06:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.648 06:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:19.648 06:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:19.648 06:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:19.648 06:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:19.648 06:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:19.648 06:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:19.648 06:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:19.648 06:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.648 06:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.908 06:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.908 06:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:19.908 06:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.908 06:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.908 [ 00:14:19.908 { 00:14:19.908 "name": "BaseBdev2", 00:14:19.908 "aliases": [ 00:14:19.908 
"6828f6d8-ee43-40f3-bcde-f300580cfdd7" 00:14:19.908 ], 00:14:19.908 "product_name": "Malloc disk", 00:14:19.908 "block_size": 512, 00:14:19.908 "num_blocks": 65536, 00:14:19.908 "uuid": "6828f6d8-ee43-40f3-bcde-f300580cfdd7", 00:14:19.908 "assigned_rate_limits": { 00:14:19.908 "rw_ios_per_sec": 0, 00:14:19.908 "rw_mbytes_per_sec": 0, 00:14:19.908 "r_mbytes_per_sec": 0, 00:14:19.908 "w_mbytes_per_sec": 0 00:14:19.908 }, 00:14:19.908 "claimed": false, 00:14:19.908 "zoned": false, 00:14:19.908 "supported_io_types": { 00:14:19.908 "read": true, 00:14:19.908 "write": true, 00:14:19.908 "unmap": true, 00:14:19.908 "flush": true, 00:14:19.908 "reset": true, 00:14:19.908 "nvme_admin": false, 00:14:19.908 "nvme_io": false, 00:14:19.908 "nvme_io_md": false, 00:14:19.908 "write_zeroes": true, 00:14:19.908 "zcopy": true, 00:14:19.908 "get_zone_info": false, 00:14:19.908 "zone_management": false, 00:14:19.908 "zone_append": false, 00:14:19.908 "compare": false, 00:14:19.908 "compare_and_write": false, 00:14:19.908 "abort": true, 00:14:19.908 "seek_hole": false, 00:14:19.908 "seek_data": false, 00:14:19.908 "copy": true, 00:14:19.908 "nvme_iov_md": false 00:14:19.908 }, 00:14:19.908 "memory_domains": [ 00:14:19.908 { 00:14:19.908 "dma_device_id": "system", 00:14:19.908 "dma_device_type": 1 00:14:19.908 }, 00:14:19.908 { 00:14:19.908 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:19.908 "dma_device_type": 2 00:14:19.908 } 00:14:19.908 ], 00:14:19.908 "driver_specific": {} 00:14:19.908 } 00:14:19.908 ] 00:14:19.908 06:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.908 06:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:19.908 06:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:19.908 06:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:19.908 06:24:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:19.908 06:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.908 06:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.908 BaseBdev3 00:14:19.908 06:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.908 06:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:19.908 06:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:19.908 06:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:19.908 06:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:19.908 06:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:19.908 06:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:19.908 06:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:19.908 06:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.908 06:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.908 06:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.908 06:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:19.908 06:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.908 06:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.908 [ 00:14:19.908 { 
00:14:19.908 "name": "BaseBdev3", 00:14:19.908 "aliases": [ 00:14:19.908 "7587235d-e7a2-4198-b0bf-a597e6c92f03" 00:14:19.908 ], 00:14:19.908 "product_name": "Malloc disk", 00:14:19.908 "block_size": 512, 00:14:19.908 "num_blocks": 65536, 00:14:19.908 "uuid": "7587235d-e7a2-4198-b0bf-a597e6c92f03", 00:14:19.908 "assigned_rate_limits": { 00:14:19.908 "rw_ios_per_sec": 0, 00:14:19.908 "rw_mbytes_per_sec": 0, 00:14:19.908 "r_mbytes_per_sec": 0, 00:14:19.908 "w_mbytes_per_sec": 0 00:14:19.908 }, 00:14:19.908 "claimed": false, 00:14:19.908 "zoned": false, 00:14:19.908 "supported_io_types": { 00:14:19.908 "read": true, 00:14:19.908 "write": true, 00:14:19.908 "unmap": true, 00:14:19.908 "flush": true, 00:14:19.908 "reset": true, 00:14:19.908 "nvme_admin": false, 00:14:19.908 "nvme_io": false, 00:14:19.908 "nvme_io_md": false, 00:14:19.908 "write_zeroes": true, 00:14:19.908 "zcopy": true, 00:14:19.908 "get_zone_info": false, 00:14:19.908 "zone_management": false, 00:14:19.908 "zone_append": false, 00:14:19.909 "compare": false, 00:14:19.909 "compare_and_write": false, 00:14:19.909 "abort": true, 00:14:19.909 "seek_hole": false, 00:14:19.909 "seek_data": false, 00:14:19.909 "copy": true, 00:14:19.909 "nvme_iov_md": false 00:14:19.909 }, 00:14:19.909 "memory_domains": [ 00:14:19.909 { 00:14:19.909 "dma_device_id": "system", 00:14:19.909 "dma_device_type": 1 00:14:19.909 }, 00:14:19.909 { 00:14:19.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:19.909 "dma_device_type": 2 00:14:19.909 } 00:14:19.909 ], 00:14:19.909 "driver_specific": {} 00:14:19.909 } 00:14:19.909 ] 00:14:19.909 06:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.909 06:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:19.909 06:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:19.909 06:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:14:19.909 06:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:19.909 06:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.909 06:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.909 BaseBdev4 00:14:19.909 06:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.909 06:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:14:19.909 06:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:19.909 06:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:19.909 06:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:19.909 06:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:19.909 06:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:19.909 06:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:19.909 06:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.909 06:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.909 06:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.909 06:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:19.909 06:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.909 06:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:14:19.909 [ 00:14:19.909 { 00:14:19.909 "name": "BaseBdev4", 00:14:19.909 "aliases": [ 00:14:19.909 "8f3916ad-8275-4702-b5a8-6aefb9ec3a0a" 00:14:19.909 ], 00:14:19.909 "product_name": "Malloc disk", 00:14:19.909 "block_size": 512, 00:14:19.909 "num_blocks": 65536, 00:14:19.909 "uuid": "8f3916ad-8275-4702-b5a8-6aefb9ec3a0a", 00:14:19.909 "assigned_rate_limits": { 00:14:19.909 "rw_ios_per_sec": 0, 00:14:19.909 "rw_mbytes_per_sec": 0, 00:14:19.909 "r_mbytes_per_sec": 0, 00:14:19.909 "w_mbytes_per_sec": 0 00:14:19.909 }, 00:14:19.909 "claimed": false, 00:14:19.909 "zoned": false, 00:14:19.909 "supported_io_types": { 00:14:19.909 "read": true, 00:14:19.909 "write": true, 00:14:19.909 "unmap": true, 00:14:19.909 "flush": true, 00:14:19.909 "reset": true, 00:14:19.909 "nvme_admin": false, 00:14:19.909 "nvme_io": false, 00:14:19.909 "nvme_io_md": false, 00:14:19.909 "write_zeroes": true, 00:14:19.909 "zcopy": true, 00:14:19.909 "get_zone_info": false, 00:14:19.909 "zone_management": false, 00:14:19.909 "zone_append": false, 00:14:19.909 "compare": false, 00:14:19.909 "compare_and_write": false, 00:14:19.909 "abort": true, 00:14:19.909 "seek_hole": false, 00:14:19.909 "seek_data": false, 00:14:19.909 "copy": true, 00:14:19.909 "nvme_iov_md": false 00:14:19.909 }, 00:14:19.909 "memory_domains": [ 00:14:19.909 { 00:14:19.909 "dma_device_id": "system", 00:14:19.909 "dma_device_type": 1 00:14:19.909 }, 00:14:19.909 { 00:14:19.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:19.909 "dma_device_type": 2 00:14:19.909 } 00:14:19.909 ], 00:14:19.909 "driver_specific": {} 00:14:19.909 } 00:14:19.909 ] 00:14:19.909 06:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.909 06:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:19.909 06:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:19.909 06:24:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:19.909 06:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:19.909 06:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.909 06:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.909 [2024-11-26 06:24:03.976568] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:19.909 [2024-11-26 06:24:03.976736] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:19.909 [2024-11-26 06:24:03.976801] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:19.909 [2024-11-26 06:24:03.979088] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:19.909 [2024-11-26 06:24:03.979212] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:19.909 06:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.909 06:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:19.909 06:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:19.909 06:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:19.909 06:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:19.909 06:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:19.909 06:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:14:19.909 06:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.909 06:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.909 06:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.909 06:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.909 06:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.909 06:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.909 06:24:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:19.909 06:24:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.909 06:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.910 06:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.910 "name": "Existed_Raid", 00:14:19.910 "uuid": "7f96638b-d7b7-43b2-9605-c98e20290614", 00:14:19.910 "strip_size_kb": 64, 00:14:19.910 "state": "configuring", 00:14:19.910 "raid_level": "concat", 00:14:19.910 "superblock": true, 00:14:19.910 "num_base_bdevs": 4, 00:14:19.910 "num_base_bdevs_discovered": 3, 00:14:19.910 "num_base_bdevs_operational": 4, 00:14:19.910 "base_bdevs_list": [ 00:14:19.910 { 00:14:19.910 "name": "BaseBdev1", 00:14:19.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.910 "is_configured": false, 00:14:19.910 "data_offset": 0, 00:14:19.910 "data_size": 0 00:14:19.910 }, 00:14:19.910 { 00:14:19.910 "name": "BaseBdev2", 00:14:19.910 "uuid": "6828f6d8-ee43-40f3-bcde-f300580cfdd7", 00:14:19.910 "is_configured": true, 00:14:19.910 "data_offset": 2048, 00:14:19.910 "data_size": 63488 
00:14:19.910 }, 00:14:19.910 { 00:14:19.910 "name": "BaseBdev3", 00:14:19.910 "uuid": "7587235d-e7a2-4198-b0bf-a597e6c92f03", 00:14:19.910 "is_configured": true, 00:14:19.910 "data_offset": 2048, 00:14:19.910 "data_size": 63488 00:14:19.910 }, 00:14:19.910 { 00:14:19.910 "name": "BaseBdev4", 00:14:19.910 "uuid": "8f3916ad-8275-4702-b5a8-6aefb9ec3a0a", 00:14:19.910 "is_configured": true, 00:14:19.910 "data_offset": 2048, 00:14:19.910 "data_size": 63488 00:14:19.910 } 00:14:19.910 ] 00:14:19.910 }' 00:14:19.910 06:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.910 06:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.478 06:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:20.478 06:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.478 06:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.478 [2024-11-26 06:24:04.435876] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:20.478 06:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.478 06:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:20.478 06:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:20.478 06:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:20.478 06:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:20.478 06:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:20.478 06:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:14:20.478 06:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.478 06:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.478 06:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.478 06:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.478 06:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:20.478 06:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.478 06:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.478 06:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.478 06:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.478 06:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.478 "name": "Existed_Raid", 00:14:20.478 "uuid": "7f96638b-d7b7-43b2-9605-c98e20290614", 00:14:20.478 "strip_size_kb": 64, 00:14:20.478 "state": "configuring", 00:14:20.478 "raid_level": "concat", 00:14:20.478 "superblock": true, 00:14:20.478 "num_base_bdevs": 4, 00:14:20.478 "num_base_bdevs_discovered": 2, 00:14:20.478 "num_base_bdevs_operational": 4, 00:14:20.478 "base_bdevs_list": [ 00:14:20.478 { 00:14:20.478 "name": "BaseBdev1", 00:14:20.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.478 "is_configured": false, 00:14:20.478 "data_offset": 0, 00:14:20.478 "data_size": 0 00:14:20.478 }, 00:14:20.478 { 00:14:20.478 "name": null, 00:14:20.478 "uuid": "6828f6d8-ee43-40f3-bcde-f300580cfdd7", 00:14:20.478 "is_configured": false, 00:14:20.478 "data_offset": 0, 00:14:20.478 "data_size": 63488 
00:14:20.478 }, 00:14:20.478 { 00:14:20.478 "name": "BaseBdev3", 00:14:20.478 "uuid": "7587235d-e7a2-4198-b0bf-a597e6c92f03", 00:14:20.478 "is_configured": true, 00:14:20.478 "data_offset": 2048, 00:14:20.478 "data_size": 63488 00:14:20.478 }, 00:14:20.478 { 00:14:20.478 "name": "BaseBdev4", 00:14:20.478 "uuid": "8f3916ad-8275-4702-b5a8-6aefb9ec3a0a", 00:14:20.478 "is_configured": true, 00:14:20.478 "data_offset": 2048, 00:14:20.478 "data_size": 63488 00:14:20.478 } 00:14:20.478 ] 00:14:20.478 }' 00:14:20.478 06:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.478 06:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.047 06:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.047 06:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:21.047 06:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.047 06:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.047 06:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.047 06:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:21.047 06:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:21.047 06:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.047 06:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.047 [2024-11-26 06:24:04.974525] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:21.047 BaseBdev1 00:14:21.047 06:24:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.047 06:24:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:21.047 06:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:21.047 06:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:21.047 06:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:21.047 06:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:21.047 06:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:21.047 06:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:21.047 06:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.047 06:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.047 06:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.047 06:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:21.047 06:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.047 06:24:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.047 [ 00:14:21.047 { 00:14:21.047 "name": "BaseBdev1", 00:14:21.047 "aliases": [ 00:14:21.047 "c5f6e0ac-8339-4052-9223-edb3f023eef1" 00:14:21.047 ], 00:14:21.047 "product_name": "Malloc disk", 00:14:21.047 "block_size": 512, 00:14:21.047 "num_blocks": 65536, 00:14:21.047 "uuid": "c5f6e0ac-8339-4052-9223-edb3f023eef1", 00:14:21.047 "assigned_rate_limits": { 00:14:21.047 "rw_ios_per_sec": 0, 00:14:21.047 "rw_mbytes_per_sec": 0, 
00:14:21.047 "r_mbytes_per_sec": 0, 00:14:21.047 "w_mbytes_per_sec": 0 00:14:21.047 }, 00:14:21.047 "claimed": true, 00:14:21.047 "claim_type": "exclusive_write", 00:14:21.047 "zoned": false, 00:14:21.047 "supported_io_types": { 00:14:21.047 "read": true, 00:14:21.047 "write": true, 00:14:21.047 "unmap": true, 00:14:21.047 "flush": true, 00:14:21.047 "reset": true, 00:14:21.047 "nvme_admin": false, 00:14:21.047 "nvme_io": false, 00:14:21.047 "nvme_io_md": false, 00:14:21.047 "write_zeroes": true, 00:14:21.047 "zcopy": true, 00:14:21.047 "get_zone_info": false, 00:14:21.047 "zone_management": false, 00:14:21.047 "zone_append": false, 00:14:21.047 "compare": false, 00:14:21.047 "compare_and_write": false, 00:14:21.047 "abort": true, 00:14:21.047 "seek_hole": false, 00:14:21.047 "seek_data": false, 00:14:21.048 "copy": true, 00:14:21.048 "nvme_iov_md": false 00:14:21.048 }, 00:14:21.048 "memory_domains": [ 00:14:21.048 { 00:14:21.048 "dma_device_id": "system", 00:14:21.048 "dma_device_type": 1 00:14:21.048 }, 00:14:21.048 { 00:14:21.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:21.048 "dma_device_type": 2 00:14:21.048 } 00:14:21.048 ], 00:14:21.048 "driver_specific": {} 00:14:21.048 } 00:14:21.048 ] 00:14:21.048 06:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.048 06:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:21.048 06:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:21.048 06:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:21.048 06:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:21.048 06:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:21.048 06:24:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:21.048 06:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:21.048 06:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:21.048 06:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:21.048 06:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:21.048 06:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:21.048 06:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.048 06:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:21.048 06:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.048 06:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.048 06:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.048 06:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:21.048 "name": "Existed_Raid", 00:14:21.048 "uuid": "7f96638b-d7b7-43b2-9605-c98e20290614", 00:14:21.048 "strip_size_kb": 64, 00:14:21.048 "state": "configuring", 00:14:21.048 "raid_level": "concat", 00:14:21.048 "superblock": true, 00:14:21.048 "num_base_bdevs": 4, 00:14:21.048 "num_base_bdevs_discovered": 3, 00:14:21.048 "num_base_bdevs_operational": 4, 00:14:21.048 "base_bdevs_list": [ 00:14:21.048 { 00:14:21.048 "name": "BaseBdev1", 00:14:21.048 "uuid": "c5f6e0ac-8339-4052-9223-edb3f023eef1", 00:14:21.048 "is_configured": true, 00:14:21.048 "data_offset": 2048, 00:14:21.048 "data_size": 63488 00:14:21.048 }, 00:14:21.048 { 
00:14:21.048 "name": null, 00:14:21.048 "uuid": "6828f6d8-ee43-40f3-bcde-f300580cfdd7", 00:14:21.048 "is_configured": false, 00:14:21.048 "data_offset": 0, 00:14:21.048 "data_size": 63488 00:14:21.048 }, 00:14:21.048 { 00:14:21.048 "name": "BaseBdev3", 00:14:21.048 "uuid": "7587235d-e7a2-4198-b0bf-a597e6c92f03", 00:14:21.048 "is_configured": true, 00:14:21.048 "data_offset": 2048, 00:14:21.048 "data_size": 63488 00:14:21.048 }, 00:14:21.048 { 00:14:21.048 "name": "BaseBdev4", 00:14:21.048 "uuid": "8f3916ad-8275-4702-b5a8-6aefb9ec3a0a", 00:14:21.048 "is_configured": true, 00:14:21.048 "data_offset": 2048, 00:14:21.048 "data_size": 63488 00:14:21.048 } 00:14:21.048 ] 00:14:21.048 }' 00:14:21.048 06:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:21.048 06:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.417 06:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.417 06:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:21.417 06:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.417 06:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.417 06:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.417 06:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:21.417 06:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:21.417 06:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.417 06:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.417 [2024-11-26 06:24:05.541758] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:21.417 06:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.418 06:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:21.682 06:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:21.682 06:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:21.682 06:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:21.682 06:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:21.682 06:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:21.682 06:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:21.682 06:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:21.682 06:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:21.682 06:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:21.682 06:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:21.682 06:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.682 06:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.682 06:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.682 06:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.682 06:24:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:21.682 "name": "Existed_Raid", 00:14:21.682 "uuid": "7f96638b-d7b7-43b2-9605-c98e20290614", 00:14:21.682 "strip_size_kb": 64, 00:14:21.682 "state": "configuring", 00:14:21.682 "raid_level": "concat", 00:14:21.682 "superblock": true, 00:14:21.682 "num_base_bdevs": 4, 00:14:21.682 "num_base_bdevs_discovered": 2, 00:14:21.682 "num_base_bdevs_operational": 4, 00:14:21.682 "base_bdevs_list": [ 00:14:21.682 { 00:14:21.682 "name": "BaseBdev1", 00:14:21.682 "uuid": "c5f6e0ac-8339-4052-9223-edb3f023eef1", 00:14:21.682 "is_configured": true, 00:14:21.682 "data_offset": 2048, 00:14:21.682 "data_size": 63488 00:14:21.682 }, 00:14:21.682 { 00:14:21.682 "name": null, 00:14:21.682 "uuid": "6828f6d8-ee43-40f3-bcde-f300580cfdd7", 00:14:21.682 "is_configured": false, 00:14:21.682 "data_offset": 0, 00:14:21.682 "data_size": 63488 00:14:21.682 }, 00:14:21.682 { 00:14:21.682 "name": null, 00:14:21.682 "uuid": "7587235d-e7a2-4198-b0bf-a597e6c92f03", 00:14:21.682 "is_configured": false, 00:14:21.682 "data_offset": 0, 00:14:21.682 "data_size": 63488 00:14:21.682 }, 00:14:21.682 { 00:14:21.682 "name": "BaseBdev4", 00:14:21.682 "uuid": "8f3916ad-8275-4702-b5a8-6aefb9ec3a0a", 00:14:21.682 "is_configured": true, 00:14:21.682 "data_offset": 2048, 00:14:21.682 "data_size": 63488 00:14:21.682 } 00:14:21.682 ] 00:14:21.682 }' 00:14:21.682 06:24:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:21.682 06:24:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.942 06:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.942 06:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.942 06:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.942 06:24:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:21.942 06:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.203 06:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:22.203 06:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:22.203 06:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.203 06:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.203 [2024-11-26 06:24:06.088936] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:22.203 06:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.203 06:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:22.203 06:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:22.203 06:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:22.203 06:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:22.203 06:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:22.203 06:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:22.203 06:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.203 06:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.203 06:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:22.203 06:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.203 06:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.203 06:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.203 06:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:22.203 06:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.203 06:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.203 06:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:22.203 "name": "Existed_Raid", 00:14:22.203 "uuid": "7f96638b-d7b7-43b2-9605-c98e20290614", 00:14:22.203 "strip_size_kb": 64, 00:14:22.203 "state": "configuring", 00:14:22.203 "raid_level": "concat", 00:14:22.203 "superblock": true, 00:14:22.203 "num_base_bdevs": 4, 00:14:22.203 "num_base_bdevs_discovered": 3, 00:14:22.203 "num_base_bdevs_operational": 4, 00:14:22.203 "base_bdevs_list": [ 00:14:22.203 { 00:14:22.203 "name": "BaseBdev1", 00:14:22.203 "uuid": "c5f6e0ac-8339-4052-9223-edb3f023eef1", 00:14:22.203 "is_configured": true, 00:14:22.203 "data_offset": 2048, 00:14:22.203 "data_size": 63488 00:14:22.203 }, 00:14:22.203 { 00:14:22.203 "name": null, 00:14:22.203 "uuid": "6828f6d8-ee43-40f3-bcde-f300580cfdd7", 00:14:22.203 "is_configured": false, 00:14:22.203 "data_offset": 0, 00:14:22.203 "data_size": 63488 00:14:22.203 }, 00:14:22.203 { 00:14:22.203 "name": "BaseBdev3", 00:14:22.203 "uuid": "7587235d-e7a2-4198-b0bf-a597e6c92f03", 00:14:22.203 "is_configured": true, 00:14:22.203 "data_offset": 2048, 00:14:22.203 "data_size": 63488 00:14:22.203 }, 00:14:22.203 { 00:14:22.203 "name": "BaseBdev4", 00:14:22.203 "uuid": 
"8f3916ad-8275-4702-b5a8-6aefb9ec3a0a", 00:14:22.203 "is_configured": true, 00:14:22.203 "data_offset": 2048, 00:14:22.203 "data_size": 63488 00:14:22.203 } 00:14:22.203 ] 00:14:22.203 }' 00:14:22.203 06:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.203 06:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.772 06:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.772 06:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:22.772 06:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.772 06:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.772 06:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.772 06:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:22.772 06:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:22.772 06:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.772 06:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.772 [2024-11-26 06:24:06.660192] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:22.772 06:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.772 06:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:22.772 06:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:22.772 06:24:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:22.772 06:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:22.772 06:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:22.772 06:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:22.772 06:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.772 06:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.772 06:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:22.772 06:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.772 06:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:22.772 06:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.772 06:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.772 06:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.772 06:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.772 06:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:22.772 "name": "Existed_Raid", 00:14:22.772 "uuid": "7f96638b-d7b7-43b2-9605-c98e20290614", 00:14:22.772 "strip_size_kb": 64, 00:14:22.772 "state": "configuring", 00:14:22.772 "raid_level": "concat", 00:14:22.772 "superblock": true, 00:14:22.772 "num_base_bdevs": 4, 00:14:22.772 "num_base_bdevs_discovered": 2, 00:14:22.772 "num_base_bdevs_operational": 4, 00:14:22.772 "base_bdevs_list": [ 00:14:22.772 { 00:14:22.772 "name": null, 00:14:22.772 
"uuid": "c5f6e0ac-8339-4052-9223-edb3f023eef1", 00:14:22.772 "is_configured": false, 00:14:22.772 "data_offset": 0, 00:14:22.772 "data_size": 63488 00:14:22.772 }, 00:14:22.772 { 00:14:22.772 "name": null, 00:14:22.772 "uuid": "6828f6d8-ee43-40f3-bcde-f300580cfdd7", 00:14:22.772 "is_configured": false, 00:14:22.772 "data_offset": 0, 00:14:22.772 "data_size": 63488 00:14:22.772 }, 00:14:22.772 { 00:14:22.772 "name": "BaseBdev3", 00:14:22.772 "uuid": "7587235d-e7a2-4198-b0bf-a597e6c92f03", 00:14:22.772 "is_configured": true, 00:14:22.772 "data_offset": 2048, 00:14:22.772 "data_size": 63488 00:14:22.772 }, 00:14:22.772 { 00:14:22.772 "name": "BaseBdev4", 00:14:22.772 "uuid": "8f3916ad-8275-4702-b5a8-6aefb9ec3a0a", 00:14:22.772 "is_configured": true, 00:14:22.772 "data_offset": 2048, 00:14:22.772 "data_size": 63488 00:14:22.772 } 00:14:22.772 ] 00:14:22.772 }' 00:14:22.773 06:24:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.773 06:24:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.342 06:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:23.342 06:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.342 06:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.342 06:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.342 06:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.342 06:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:23.342 06:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:23.342 06:24:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.342 06:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.342 [2024-11-26 06:24:07.272977] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:23.342 06:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.342 06:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:23.342 06:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:23.342 06:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:23.342 06:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:23.342 06:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:23.342 06:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:23.342 06:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:23.342 06:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:23.342 06:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:23.342 06:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:23.342 06:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.342 06:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:23.342 06:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.342 06:24:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.342 06:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.342 06:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:23.342 "name": "Existed_Raid", 00:14:23.342 "uuid": "7f96638b-d7b7-43b2-9605-c98e20290614", 00:14:23.342 "strip_size_kb": 64, 00:14:23.342 "state": "configuring", 00:14:23.342 "raid_level": "concat", 00:14:23.343 "superblock": true, 00:14:23.343 "num_base_bdevs": 4, 00:14:23.343 "num_base_bdevs_discovered": 3, 00:14:23.343 "num_base_bdevs_operational": 4, 00:14:23.343 "base_bdevs_list": [ 00:14:23.343 { 00:14:23.343 "name": null, 00:14:23.343 "uuid": "c5f6e0ac-8339-4052-9223-edb3f023eef1", 00:14:23.343 "is_configured": false, 00:14:23.343 "data_offset": 0, 00:14:23.343 "data_size": 63488 00:14:23.343 }, 00:14:23.343 { 00:14:23.343 "name": "BaseBdev2", 00:14:23.343 "uuid": "6828f6d8-ee43-40f3-bcde-f300580cfdd7", 00:14:23.343 "is_configured": true, 00:14:23.343 "data_offset": 2048, 00:14:23.343 "data_size": 63488 00:14:23.343 }, 00:14:23.343 { 00:14:23.343 "name": "BaseBdev3", 00:14:23.343 "uuid": "7587235d-e7a2-4198-b0bf-a597e6c92f03", 00:14:23.343 "is_configured": true, 00:14:23.343 "data_offset": 2048, 00:14:23.343 "data_size": 63488 00:14:23.343 }, 00:14:23.343 { 00:14:23.343 "name": "BaseBdev4", 00:14:23.343 "uuid": "8f3916ad-8275-4702-b5a8-6aefb9ec3a0a", 00:14:23.343 "is_configured": true, 00:14:23.343 "data_offset": 2048, 00:14:23.343 "data_size": 63488 00:14:23.343 } 00:14:23.343 ] 00:14:23.343 }' 00:14:23.343 06:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:23.343 06:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.912 06:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.912 06:24:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:23.912 06:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.912 06:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.912 06:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.912 06:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:23.912 06:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:23.912 06:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.912 06:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.912 06:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.912 06:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.912 06:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c5f6e0ac-8339-4052-9223-edb3f023eef1 00:14:23.912 06:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.912 06:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.912 [2024-11-26 06:24:07.869272] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:23.912 [2024-11-26 06:24:07.869690] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:23.912 [2024-11-26 06:24:07.869748] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:23.912 [2024-11-26 06:24:07.870142] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:14:23.912 NewBaseBdev 00:14:23.912 [2024-11-26 06:24:07.870374] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:23.912 [2024-11-26 06:24:07.870394] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:23.912 [2024-11-26 06:24:07.870558] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:23.912 06:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.912 06:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:23.912 06:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:23.912 06:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:23.912 06:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:23.912 06:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:23.912 06:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:23.912 06:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:23.912 06:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.913 06:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.913 06:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.913 06:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:23.913 06:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.913 06:24:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.913 [ 00:14:23.913 { 00:14:23.913 "name": "NewBaseBdev", 00:14:23.913 "aliases": [ 00:14:23.913 "c5f6e0ac-8339-4052-9223-edb3f023eef1" 00:14:23.913 ], 00:14:23.913 "product_name": "Malloc disk", 00:14:23.913 "block_size": 512, 00:14:23.913 "num_blocks": 65536, 00:14:23.913 "uuid": "c5f6e0ac-8339-4052-9223-edb3f023eef1", 00:14:23.913 "assigned_rate_limits": { 00:14:23.913 "rw_ios_per_sec": 0, 00:14:23.913 "rw_mbytes_per_sec": 0, 00:14:23.913 "r_mbytes_per_sec": 0, 00:14:23.913 "w_mbytes_per_sec": 0 00:14:23.913 }, 00:14:23.913 "claimed": true, 00:14:23.913 "claim_type": "exclusive_write", 00:14:23.913 "zoned": false, 00:14:23.913 "supported_io_types": { 00:14:23.913 "read": true, 00:14:23.913 "write": true, 00:14:23.913 "unmap": true, 00:14:23.913 "flush": true, 00:14:23.913 "reset": true, 00:14:23.913 "nvme_admin": false, 00:14:23.913 "nvme_io": false, 00:14:23.913 "nvme_io_md": false, 00:14:23.913 "write_zeroes": true, 00:14:23.913 "zcopy": true, 00:14:23.913 "get_zone_info": false, 00:14:23.913 "zone_management": false, 00:14:23.913 "zone_append": false, 00:14:23.913 "compare": false, 00:14:23.913 "compare_and_write": false, 00:14:23.913 "abort": true, 00:14:23.913 "seek_hole": false, 00:14:23.913 "seek_data": false, 00:14:23.913 "copy": true, 00:14:23.913 "nvme_iov_md": false 00:14:23.913 }, 00:14:23.913 "memory_domains": [ 00:14:23.913 { 00:14:23.913 "dma_device_id": "system", 00:14:23.913 "dma_device_type": 1 00:14:23.913 }, 00:14:23.913 { 00:14:23.913 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.913 "dma_device_type": 2 00:14:23.913 } 00:14:23.913 ], 00:14:23.913 "driver_specific": {} 00:14:23.913 } 00:14:23.913 ] 00:14:23.913 06:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.913 06:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:23.913 06:24:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:14:23.913 06:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:23.913 06:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:23.913 06:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:23.913 06:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:23.913 06:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:23.913 06:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:23.913 06:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:23.913 06:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:23.913 06:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:23.913 06:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.913 06:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:23.913 06:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.913 06:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.913 06:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.913 06:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:23.913 "name": "Existed_Raid", 00:14:23.913 "uuid": "7f96638b-d7b7-43b2-9605-c98e20290614", 00:14:23.913 "strip_size_kb": 64, 00:14:23.913 
"state": "online", 00:14:23.913 "raid_level": "concat", 00:14:23.913 "superblock": true, 00:14:23.913 "num_base_bdevs": 4, 00:14:23.913 "num_base_bdevs_discovered": 4, 00:14:23.913 "num_base_bdevs_operational": 4, 00:14:23.913 "base_bdevs_list": [ 00:14:23.913 { 00:14:23.913 "name": "NewBaseBdev", 00:14:23.913 "uuid": "c5f6e0ac-8339-4052-9223-edb3f023eef1", 00:14:23.913 "is_configured": true, 00:14:23.913 "data_offset": 2048, 00:14:23.913 "data_size": 63488 00:14:23.913 }, 00:14:23.913 { 00:14:23.913 "name": "BaseBdev2", 00:14:23.913 "uuid": "6828f6d8-ee43-40f3-bcde-f300580cfdd7", 00:14:23.913 "is_configured": true, 00:14:23.913 "data_offset": 2048, 00:14:23.913 "data_size": 63488 00:14:23.913 }, 00:14:23.913 { 00:14:23.913 "name": "BaseBdev3", 00:14:23.913 "uuid": "7587235d-e7a2-4198-b0bf-a597e6c92f03", 00:14:23.913 "is_configured": true, 00:14:23.913 "data_offset": 2048, 00:14:23.913 "data_size": 63488 00:14:23.913 }, 00:14:23.913 { 00:14:23.913 "name": "BaseBdev4", 00:14:23.913 "uuid": "8f3916ad-8275-4702-b5a8-6aefb9ec3a0a", 00:14:23.913 "is_configured": true, 00:14:23.913 "data_offset": 2048, 00:14:23.913 "data_size": 63488 00:14:23.913 } 00:14:23.913 ] 00:14:23.913 }' 00:14:23.913 06:24:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:23.913 06:24:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.483 06:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:24.483 06:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:24.483 06:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:24.483 06:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:24.483 06:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:24.483 
06:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:24.483 06:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:24.483 06:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.483 06:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.483 06:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:24.483 [2024-11-26 06:24:08.337033] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:24.483 06:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.483 06:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:24.483 "name": "Existed_Raid", 00:14:24.483 "aliases": [ 00:14:24.483 "7f96638b-d7b7-43b2-9605-c98e20290614" 00:14:24.483 ], 00:14:24.483 "product_name": "Raid Volume", 00:14:24.483 "block_size": 512, 00:14:24.483 "num_blocks": 253952, 00:14:24.483 "uuid": "7f96638b-d7b7-43b2-9605-c98e20290614", 00:14:24.483 "assigned_rate_limits": { 00:14:24.483 "rw_ios_per_sec": 0, 00:14:24.483 "rw_mbytes_per_sec": 0, 00:14:24.483 "r_mbytes_per_sec": 0, 00:14:24.483 "w_mbytes_per_sec": 0 00:14:24.483 }, 00:14:24.483 "claimed": false, 00:14:24.483 "zoned": false, 00:14:24.483 "supported_io_types": { 00:14:24.483 "read": true, 00:14:24.483 "write": true, 00:14:24.483 "unmap": true, 00:14:24.483 "flush": true, 00:14:24.483 "reset": true, 00:14:24.483 "nvme_admin": false, 00:14:24.483 "nvme_io": false, 00:14:24.483 "nvme_io_md": false, 00:14:24.483 "write_zeroes": true, 00:14:24.483 "zcopy": false, 00:14:24.483 "get_zone_info": false, 00:14:24.483 "zone_management": false, 00:14:24.483 "zone_append": false, 00:14:24.483 "compare": false, 00:14:24.483 "compare_and_write": false, 00:14:24.483 "abort": 
false, 00:14:24.483 "seek_hole": false, 00:14:24.483 "seek_data": false, 00:14:24.483 "copy": false, 00:14:24.483 "nvme_iov_md": false 00:14:24.483 }, 00:14:24.483 "memory_domains": [ 00:14:24.483 { 00:14:24.483 "dma_device_id": "system", 00:14:24.483 "dma_device_type": 1 00:14:24.483 }, 00:14:24.483 { 00:14:24.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:24.483 "dma_device_type": 2 00:14:24.483 }, 00:14:24.483 { 00:14:24.483 "dma_device_id": "system", 00:14:24.483 "dma_device_type": 1 00:14:24.483 }, 00:14:24.483 { 00:14:24.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:24.483 "dma_device_type": 2 00:14:24.483 }, 00:14:24.483 { 00:14:24.483 "dma_device_id": "system", 00:14:24.483 "dma_device_type": 1 00:14:24.483 }, 00:14:24.483 { 00:14:24.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:24.483 "dma_device_type": 2 00:14:24.483 }, 00:14:24.484 { 00:14:24.484 "dma_device_id": "system", 00:14:24.484 "dma_device_type": 1 00:14:24.484 }, 00:14:24.484 { 00:14:24.484 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:24.484 "dma_device_type": 2 00:14:24.484 } 00:14:24.484 ], 00:14:24.484 "driver_specific": { 00:14:24.484 "raid": { 00:14:24.484 "uuid": "7f96638b-d7b7-43b2-9605-c98e20290614", 00:14:24.484 "strip_size_kb": 64, 00:14:24.484 "state": "online", 00:14:24.484 "raid_level": "concat", 00:14:24.484 "superblock": true, 00:14:24.484 "num_base_bdevs": 4, 00:14:24.484 "num_base_bdevs_discovered": 4, 00:14:24.484 "num_base_bdevs_operational": 4, 00:14:24.484 "base_bdevs_list": [ 00:14:24.484 { 00:14:24.484 "name": "NewBaseBdev", 00:14:24.484 "uuid": "c5f6e0ac-8339-4052-9223-edb3f023eef1", 00:14:24.484 "is_configured": true, 00:14:24.484 "data_offset": 2048, 00:14:24.484 "data_size": 63488 00:14:24.484 }, 00:14:24.484 { 00:14:24.484 "name": "BaseBdev2", 00:14:24.484 "uuid": "6828f6d8-ee43-40f3-bcde-f300580cfdd7", 00:14:24.484 "is_configured": true, 00:14:24.484 "data_offset": 2048, 00:14:24.484 "data_size": 63488 00:14:24.484 }, 00:14:24.484 { 00:14:24.484 
"name": "BaseBdev3", 00:14:24.484 "uuid": "7587235d-e7a2-4198-b0bf-a597e6c92f03", 00:14:24.484 "is_configured": true, 00:14:24.484 "data_offset": 2048, 00:14:24.484 "data_size": 63488 00:14:24.484 }, 00:14:24.484 { 00:14:24.484 "name": "BaseBdev4", 00:14:24.484 "uuid": "8f3916ad-8275-4702-b5a8-6aefb9ec3a0a", 00:14:24.484 "is_configured": true, 00:14:24.484 "data_offset": 2048, 00:14:24.484 "data_size": 63488 00:14:24.484 } 00:14:24.484 ] 00:14:24.484 } 00:14:24.484 } 00:14:24.484 }' 00:14:24.484 06:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:24.484 06:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:24.484 BaseBdev2 00:14:24.484 BaseBdev3 00:14:24.484 BaseBdev4' 00:14:24.484 06:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:24.484 06:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:24.484 06:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:24.484 06:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:24.484 06:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:24.484 06:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.484 06:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.484 06:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.484 06:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:24.484 06:24:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:24.484 06:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:24.484 06:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:24.484 06:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.484 06:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.484 06:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:24.484 06:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.484 06:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:24.484 06:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:24.484 06:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:24.484 06:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:24.484 06:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.484 06:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.484 06:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:24.484 06:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.484 06:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:24.484 06:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:14:24.484 06:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:24.484 06:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:24.484 06:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:24.484 06:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.484 06:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.743 06:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.744 06:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:24.744 06:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:24.744 06:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:24.744 06:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.744 06:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.744 [2024-11-26 06:24:08.644344] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:24.744 [2024-11-26 06:24:08.644400] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:24.744 [2024-11-26 06:24:08.644510] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:24.744 [2024-11-26 06:24:08.644590] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:24.744 [2024-11-26 06:24:08.644603] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:14:24.744 06:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.744 06:24:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72443 00:14:24.744 06:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 72443 ']' 00:14:24.744 06:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 72443 00:14:24.744 06:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:24.744 06:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:24.744 06:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72443 00:14:24.744 killing process with pid 72443 00:14:24.744 06:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:24.744 06:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:24.744 06:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72443' 00:14:24.744 06:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 72443 00:14:24.744 [2024-11-26 06:24:08.692251] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:24.744 06:24:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 72443 00:14:25.312 [2024-11-26 06:24:09.181035] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:26.695 06:24:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:26.695 00:14:26.695 real 0m12.542s 00:14:26.695 user 0m19.619s 00:14:26.695 sys 0m2.299s 00:14:26.695 ************************************ 00:14:26.695 END TEST raid_state_function_test_sb 00:14:26.695 
************************************ 00:14:26.695 06:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:26.695 06:24:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.695 06:24:10 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:14:26.695 06:24:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:26.695 06:24:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:26.695 06:24:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:26.695 ************************************ 00:14:26.695 START TEST raid_superblock_test 00:14:26.695 ************************************ 00:14:26.695 06:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:14:26.695 06:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:14:26.695 06:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:14:26.695 06:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:26.695 06:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:26.695 06:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:26.695 06:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:26.695 06:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:26.695 06:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:26.695 06:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:26.695 06:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:26.695 06:24:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:26.695 06:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:26.695 06:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:26.695 06:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:14:26.695 06:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:14:26.695 06:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:14:26.695 06:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=73126 00:14:26.695 06:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:26.695 06:24:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 73126 00:14:26.695 06:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 73126 ']' 00:14:26.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:26.695 06:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:26.695 06:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:26.695 06:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:26.695 06:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:26.695 06:24:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.695 [2024-11-26 06:24:10.706119] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:14:26.695 [2024-11-26 06:24:10.706292] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73126 ] 00:14:26.955 [2024-11-26 06:24:10.873503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.955 [2024-11-26 06:24:11.040095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:27.214 [2024-11-26 06:24:11.326029] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:27.214 [2024-11-26 06:24:11.326137] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:27.781 06:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:27.781 06:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:14:27.781 06:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:27.781 06:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:27.781 06:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:27.781 06:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:27.781 06:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:27.781 06:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:27.781 06:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:27.781 06:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:27.781 06:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:14:27.781 
06:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.781 06:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.781 malloc1 00:14:27.781 06:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.781 06:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:27.781 06:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.781 06:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.782 [2024-11-26 06:24:11.739626] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:27.782 [2024-11-26 06:24:11.739766] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:27.782 [2024-11-26 06:24:11.739852] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:27.782 [2024-11-26 06:24:11.739894] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:27.782 [2024-11-26 06:24:11.742864] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:27.782 [2024-11-26 06:24:11.742970] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:27.782 pt1 00:14:27.782 06:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.782 06:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:27.782 06:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:27.782 06:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:27.782 06:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:14:27.782 06:24:11 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:27.782 06:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:27.782 06:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:27.782 06:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:27.782 06:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:27.782 06:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.782 06:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.782 malloc2 00:14:27.782 06:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.782 06:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:27.782 06:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.782 06:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.782 [2024-11-26 06:24:11.813957] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:27.782 [2024-11-26 06:24:11.814110] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:27.782 [2024-11-26 06:24:11.814147] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:27.782 [2024-11-26 06:24:11.814160] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:27.782 [2024-11-26 06:24:11.817012] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:27.782 [2024-11-26 06:24:11.817068] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:27.782 
pt2 00:14:27.782 06:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.782 06:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:27.782 06:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:27.782 06:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:14:27.782 06:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:14:27.782 06:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:27.782 06:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:27.782 06:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:27.782 06:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:27.782 06:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:14:27.782 06:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.782 06:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.782 malloc3 00:14:27.782 06:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.782 06:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:27.782 06:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.782 06:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.782 [2024-11-26 06:24:11.897077] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:27.782 [2024-11-26 06:24:11.897200] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:27.782 [2024-11-26 06:24:11.897271] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:27.782 [2024-11-26 06:24:11.897310] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:27.782 [2024-11-26 06:24:11.900221] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:27.782 [2024-11-26 06:24:11.900303] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:27.782 pt3 00:14:27.782 06:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.782 06:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:27.782 06:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:27.782 06:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:14:27.782 06:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:14:27.782 06:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:14:27.782 06:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:27.782 06:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:27.782 06:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:27.782 06:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:14:27.782 06:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.782 06:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.041 malloc4 00:14:28.041 06:24:11 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.041 06:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:28.041 06:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.041 06:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.041 [2024-11-26 06:24:11.972088] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:28.041 [2024-11-26 06:24:11.972231] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:28.041 [2024-11-26 06:24:11.972279] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:28.041 [2024-11-26 06:24:11.972315] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:28.041 [2024-11-26 06:24:11.975237] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:28.041 [2024-11-26 06:24:11.975325] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:28.041 pt4 00:14:28.041 06:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.041 06:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:28.041 06:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:28.041 06:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:14:28.041 06:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.041 06:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.041 [2024-11-26 06:24:11.988282] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:28.041 [2024-11-26 
06:24:11.990827] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:28.041 [2024-11-26 06:24:11.990966] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:28.041 [2024-11-26 06:24:11.991115] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:28.041 [2024-11-26 06:24:11.991426] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:28.041 [2024-11-26 06:24:11.991478] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:28.041 [2024-11-26 06:24:11.991890] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:28.041 [2024-11-26 06:24:11.992206] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:28.041 [2024-11-26 06:24:11.992262] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:28.041 [2024-11-26 06:24:11.992592] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:28.041 06:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.042 06:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:14:28.042 06:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:28.042 06:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:28.042 06:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:28.042 06:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:28.042 06:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:28.042 06:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:14:28.042 06:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:28.042 06:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:28.042 06:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:28.042 06:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.042 06:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.042 06:24:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.042 06:24:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.042 06:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.042 06:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:28.042 "name": "raid_bdev1", 00:14:28.042 "uuid": "05b05831-72ac-4761-9cd9-0de77b272663", 00:14:28.042 "strip_size_kb": 64, 00:14:28.042 "state": "online", 00:14:28.042 "raid_level": "concat", 00:14:28.042 "superblock": true, 00:14:28.042 "num_base_bdevs": 4, 00:14:28.042 "num_base_bdevs_discovered": 4, 00:14:28.042 "num_base_bdevs_operational": 4, 00:14:28.042 "base_bdevs_list": [ 00:14:28.042 { 00:14:28.042 "name": "pt1", 00:14:28.042 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:28.042 "is_configured": true, 00:14:28.042 "data_offset": 2048, 00:14:28.042 "data_size": 63488 00:14:28.042 }, 00:14:28.042 { 00:14:28.042 "name": "pt2", 00:14:28.042 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:28.042 "is_configured": true, 00:14:28.042 "data_offset": 2048, 00:14:28.042 "data_size": 63488 00:14:28.042 }, 00:14:28.042 { 00:14:28.042 "name": "pt3", 00:14:28.042 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:28.042 "is_configured": true, 00:14:28.042 "data_offset": 2048, 00:14:28.042 
"data_size": 63488 00:14:28.042 }, 00:14:28.042 { 00:14:28.042 "name": "pt4", 00:14:28.042 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:28.042 "is_configured": true, 00:14:28.042 "data_offset": 2048, 00:14:28.042 "data_size": 63488 00:14:28.042 } 00:14:28.042 ] 00:14:28.042 }' 00:14:28.042 06:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:28.042 06:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.611 06:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:28.611 06:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:28.611 06:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:28.611 06:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:28.611 06:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:28.611 06:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:28.611 06:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:28.611 06:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:28.611 06:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.611 06:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.611 [2024-11-26 06:24:12.500392] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:28.611 06:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.611 06:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:28.611 "name": "raid_bdev1", 00:14:28.611 "aliases": [ 00:14:28.611 "05b05831-72ac-4761-9cd9-0de77b272663" 
00:14:28.611 ], 00:14:28.611 "product_name": "Raid Volume", 00:14:28.611 "block_size": 512, 00:14:28.611 "num_blocks": 253952, 00:14:28.611 "uuid": "05b05831-72ac-4761-9cd9-0de77b272663", 00:14:28.611 "assigned_rate_limits": { 00:14:28.611 "rw_ios_per_sec": 0, 00:14:28.611 "rw_mbytes_per_sec": 0, 00:14:28.611 "r_mbytes_per_sec": 0, 00:14:28.612 "w_mbytes_per_sec": 0 00:14:28.612 }, 00:14:28.612 "claimed": false, 00:14:28.612 "zoned": false, 00:14:28.612 "supported_io_types": { 00:14:28.612 "read": true, 00:14:28.612 "write": true, 00:14:28.612 "unmap": true, 00:14:28.612 "flush": true, 00:14:28.612 "reset": true, 00:14:28.612 "nvme_admin": false, 00:14:28.612 "nvme_io": false, 00:14:28.612 "nvme_io_md": false, 00:14:28.612 "write_zeroes": true, 00:14:28.612 "zcopy": false, 00:14:28.612 "get_zone_info": false, 00:14:28.612 "zone_management": false, 00:14:28.612 "zone_append": false, 00:14:28.612 "compare": false, 00:14:28.612 "compare_and_write": false, 00:14:28.612 "abort": false, 00:14:28.612 "seek_hole": false, 00:14:28.612 "seek_data": false, 00:14:28.612 "copy": false, 00:14:28.612 "nvme_iov_md": false 00:14:28.612 }, 00:14:28.612 "memory_domains": [ 00:14:28.612 { 00:14:28.612 "dma_device_id": "system", 00:14:28.612 "dma_device_type": 1 00:14:28.612 }, 00:14:28.612 { 00:14:28.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:28.612 "dma_device_type": 2 00:14:28.612 }, 00:14:28.612 { 00:14:28.612 "dma_device_id": "system", 00:14:28.612 "dma_device_type": 1 00:14:28.612 }, 00:14:28.612 { 00:14:28.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:28.612 "dma_device_type": 2 00:14:28.612 }, 00:14:28.612 { 00:14:28.612 "dma_device_id": "system", 00:14:28.612 "dma_device_type": 1 00:14:28.612 }, 00:14:28.612 { 00:14:28.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:28.612 "dma_device_type": 2 00:14:28.612 }, 00:14:28.612 { 00:14:28.612 "dma_device_id": "system", 00:14:28.612 "dma_device_type": 1 00:14:28.612 }, 00:14:28.612 { 00:14:28.612 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:14:28.612 "dma_device_type": 2 00:14:28.612 } 00:14:28.612 ], 00:14:28.612 "driver_specific": { 00:14:28.612 "raid": { 00:14:28.612 "uuid": "05b05831-72ac-4761-9cd9-0de77b272663", 00:14:28.612 "strip_size_kb": 64, 00:14:28.612 "state": "online", 00:14:28.612 "raid_level": "concat", 00:14:28.612 "superblock": true, 00:14:28.612 "num_base_bdevs": 4, 00:14:28.612 "num_base_bdevs_discovered": 4, 00:14:28.612 "num_base_bdevs_operational": 4, 00:14:28.612 "base_bdevs_list": [ 00:14:28.612 { 00:14:28.612 "name": "pt1", 00:14:28.612 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:28.612 "is_configured": true, 00:14:28.612 "data_offset": 2048, 00:14:28.612 "data_size": 63488 00:14:28.612 }, 00:14:28.612 { 00:14:28.612 "name": "pt2", 00:14:28.612 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:28.612 "is_configured": true, 00:14:28.612 "data_offset": 2048, 00:14:28.612 "data_size": 63488 00:14:28.612 }, 00:14:28.612 { 00:14:28.612 "name": "pt3", 00:14:28.612 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:28.612 "is_configured": true, 00:14:28.612 "data_offset": 2048, 00:14:28.612 "data_size": 63488 00:14:28.612 }, 00:14:28.612 { 00:14:28.612 "name": "pt4", 00:14:28.612 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:28.612 "is_configured": true, 00:14:28.612 "data_offset": 2048, 00:14:28.612 "data_size": 63488 00:14:28.612 } 00:14:28.612 ] 00:14:28.612 } 00:14:28.612 } 00:14:28.612 }' 00:14:28.612 06:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:28.612 06:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:28.612 pt2 00:14:28.612 pt3 00:14:28.612 pt4' 00:14:28.612 06:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:28.612 06:24:12 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:28.612 06:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:28.612 06:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:28.612 06:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:28.612 06:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.612 06:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.612 06:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.612 06:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:28.612 06:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:28.612 06:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:28.612 06:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:28.612 06:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.612 06:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.612 06:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:28.612 06:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.878 06:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:28.878 06:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:28.878 06:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:28.878 06:24:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:28.878 06:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.878 06:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.878 06:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:28.878 06:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.878 06:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:28.878 06:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:28.878 06:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:28.878 06:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:14:28.878 06:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.878 06:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.878 06:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:28.878 06:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.878 06:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:28.878 06:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:28.878 06:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:28.878 06:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.878 06:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:14:28.878 06:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:28.878 [2024-11-26 06:24:12.867706] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:28.878 06:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.878 06:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=05b05831-72ac-4761-9cd9-0de77b272663 00:14:28.878 06:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 05b05831-72ac-4761-9cd9-0de77b272663 ']' 00:14:28.878 06:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:28.878 06:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.878 06:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.878 [2024-11-26 06:24:12.919282] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:28.878 [2024-11-26 06:24:12.919377] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:28.878 [2024-11-26 06:24:12.919556] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:28.878 [2024-11-26 06:24:12.919692] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:28.878 [2024-11-26 06:24:12.919757] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:28.878 06:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.878 06:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.878 06:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:28.878 06:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:28.878 06:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.878 06:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.878 06:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:28.878 06:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:28.878 06:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:28.878 06:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:28.878 06:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.878 06:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.878 06:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.878 06:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:28.878 06:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:28.878 06:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.878 06:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.878 06:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.878 06:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:28.878 06:24:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:14:28.879 06:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.879 06:24:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.879 06:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:28.879 06:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:28.879 06:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:14:28.879 06:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.139 06:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.139 06:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.139 06:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:29.139 06:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:29.139 06:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.139 06:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.139 06:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.139 06:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:14:29.139 06:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:29.139 06:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:14:29.139 06:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:29.139 06:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:29.139 06:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:29.139 06:24:13 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:29.139 06:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:29.139 06:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:29.139 06:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.139 06:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.139 [2024-11-26 06:24:13.071032] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:29.139 [2024-11-26 06:24:13.073672] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:29.139 [2024-11-26 06:24:13.073783] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:29.139 [2024-11-26 06:24:13.073848] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:14:29.139 [2024-11-26 06:24:13.073969] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:29.139 [2024-11-26 06:24:13.074135] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:29.139 [2024-11-26 06:24:13.074212] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:29.139 [2024-11-26 06:24:13.074306] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:14:29.139 [2024-11-26 06:24:13.074369] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:29.139 [2024-11-26 06:24:13.074417] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:14:29.139 request: 00:14:29.139 { 00:14:29.139 "name": "raid_bdev1", 00:14:29.139 "raid_level": "concat", 00:14:29.139 "base_bdevs": [ 00:14:29.139 "malloc1", 00:14:29.139 "malloc2", 00:14:29.139 "malloc3", 00:14:29.139 "malloc4" 00:14:29.139 ], 00:14:29.139 "strip_size_kb": 64, 00:14:29.139 "superblock": false, 00:14:29.139 "method": "bdev_raid_create", 00:14:29.139 "req_id": 1 00:14:29.139 } 00:14:29.139 Got JSON-RPC error response 00:14:29.139 response: 00:14:29.139 { 00:14:29.139 "code": -17, 00:14:29.139 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:29.139 } 00:14:29.139 06:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:29.139 06:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:14:29.139 06:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:29.139 06:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:29.140 06:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:29.140 06:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.140 06:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.140 06:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.140 06:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:29.140 06:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.140 06:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:29.140 06:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:29.140 06:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:14:29.140 06:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.140 06:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.140 [2024-11-26 06:24:13.158917] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:29.140 [2024-11-26 06:24:13.159087] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.140 [2024-11-26 06:24:13.159159] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:29.140 [2024-11-26 06:24:13.159203] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.140 [2024-11-26 06:24:13.162185] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.140 [2024-11-26 06:24:13.162297] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:29.140 [2024-11-26 06:24:13.162455] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:29.140 [2024-11-26 06:24:13.162589] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:29.140 pt1 00:14:29.140 06:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.140 06:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:14:29.140 06:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:29.140 06:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:29.140 06:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:29.140 06:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:29.140 06:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:14:29.140 06:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.140 06:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.140 06:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:29.140 06:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.140 06:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.140 06:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.140 06:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.140 06:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.140 06:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.140 06:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:29.140 "name": "raid_bdev1", 00:14:29.140 "uuid": "05b05831-72ac-4761-9cd9-0de77b272663", 00:14:29.140 "strip_size_kb": 64, 00:14:29.140 "state": "configuring", 00:14:29.140 "raid_level": "concat", 00:14:29.140 "superblock": true, 00:14:29.140 "num_base_bdevs": 4, 00:14:29.140 "num_base_bdevs_discovered": 1, 00:14:29.140 "num_base_bdevs_operational": 4, 00:14:29.140 "base_bdevs_list": [ 00:14:29.140 { 00:14:29.140 "name": "pt1", 00:14:29.140 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:29.140 "is_configured": true, 00:14:29.140 "data_offset": 2048, 00:14:29.140 "data_size": 63488 00:14:29.140 }, 00:14:29.140 { 00:14:29.140 "name": null, 00:14:29.140 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:29.140 "is_configured": false, 00:14:29.140 "data_offset": 2048, 00:14:29.140 "data_size": 63488 00:14:29.140 }, 00:14:29.140 { 00:14:29.140 "name": null, 00:14:29.140 
"uuid": "00000000-0000-0000-0000-000000000003", 00:14:29.140 "is_configured": false, 00:14:29.140 "data_offset": 2048, 00:14:29.140 "data_size": 63488 00:14:29.140 }, 00:14:29.140 { 00:14:29.140 "name": null, 00:14:29.140 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:29.140 "is_configured": false, 00:14:29.140 "data_offset": 2048, 00:14:29.140 "data_size": 63488 00:14:29.140 } 00:14:29.140 ] 00:14:29.140 }' 00:14:29.140 06:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:29.140 06:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.710 06:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:14:29.710 06:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:29.710 06:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.710 06:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.710 [2024-11-26 06:24:13.690088] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:29.710 [2024-11-26 06:24:13.690250] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.710 [2024-11-26 06:24:13.690322] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:29.710 [2024-11-26 06:24:13.690370] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.710 [2024-11-26 06:24:13.691010] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.710 [2024-11-26 06:24:13.691096] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:29.710 [2024-11-26 06:24:13.691257] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:29.710 [2024-11-26 06:24:13.691332] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:29.710 pt2 00:14:29.710 06:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.710 06:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:14:29.710 06:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.710 06:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.710 [2024-11-26 06:24:13.702088] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:29.710 06:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.710 06:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:14:29.710 06:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:29.710 06:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:29.710 06:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:29.710 06:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:29.710 06:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:29.710 06:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.710 06:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.710 06:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:29.710 06:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.710 06:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.710 06:24:13 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.710 06:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.710 06:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.710 06:24:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.710 06:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:29.710 "name": "raid_bdev1", 00:14:29.710 "uuid": "05b05831-72ac-4761-9cd9-0de77b272663", 00:14:29.710 "strip_size_kb": 64, 00:14:29.710 "state": "configuring", 00:14:29.710 "raid_level": "concat", 00:14:29.710 "superblock": true, 00:14:29.710 "num_base_bdevs": 4, 00:14:29.710 "num_base_bdevs_discovered": 1, 00:14:29.710 "num_base_bdevs_operational": 4, 00:14:29.710 "base_bdevs_list": [ 00:14:29.710 { 00:14:29.710 "name": "pt1", 00:14:29.710 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:29.710 "is_configured": true, 00:14:29.710 "data_offset": 2048, 00:14:29.710 "data_size": 63488 00:14:29.710 }, 00:14:29.710 { 00:14:29.710 "name": null, 00:14:29.710 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:29.710 "is_configured": false, 00:14:29.710 "data_offset": 0, 00:14:29.710 "data_size": 63488 00:14:29.710 }, 00:14:29.710 { 00:14:29.710 "name": null, 00:14:29.710 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:29.710 "is_configured": false, 00:14:29.710 "data_offset": 2048, 00:14:29.710 "data_size": 63488 00:14:29.710 }, 00:14:29.710 { 00:14:29.710 "name": null, 00:14:29.710 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:29.710 "is_configured": false, 00:14:29.710 "data_offset": 2048, 00:14:29.710 "data_size": 63488 00:14:29.710 } 00:14:29.710 ] 00:14:29.710 }' 00:14:29.710 06:24:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:29.710 06:24:13 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:30.280 06:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:30.280 06:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:30.280 06:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:30.280 06:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.280 06:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.280 [2024-11-26 06:24:14.209251] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:30.280 [2024-11-26 06:24:14.209410] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:30.280 [2024-11-26 06:24:14.209473] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:14:30.280 [2024-11-26 06:24:14.209518] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:30.280 [2024-11-26 06:24:14.210188] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:30.280 [2024-11-26 06:24:14.210265] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:30.280 [2024-11-26 06:24:14.210424] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:30.280 [2024-11-26 06:24:14.210491] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:30.280 pt2 00:14:30.280 06:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.280 06:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:30.280 06:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:30.281 06:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:30.281 06:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.281 06:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.281 [2024-11-26 06:24:14.221236] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:30.281 [2024-11-26 06:24:14.221383] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:30.281 [2024-11-26 06:24:14.221461] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:30.281 [2024-11-26 06:24:14.221504] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:30.281 [2024-11-26 06:24:14.222168] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:30.281 [2024-11-26 06:24:14.222244] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:30.281 [2024-11-26 06:24:14.222411] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:30.281 [2024-11-26 06:24:14.222475] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:30.281 pt3 00:14:30.281 06:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.281 06:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:30.281 06:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:30.281 06:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:30.281 06:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.281 06:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.281 [2024-11-26 06:24:14.233209] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:30.281 [2024-11-26 06:24:14.233421] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:30.281 [2024-11-26 06:24:14.233501] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:14:30.281 [2024-11-26 06:24:14.233541] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:30.281 [2024-11-26 06:24:14.234234] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:30.281 [2024-11-26 06:24:14.234314] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:30.281 [2024-11-26 06:24:14.234498] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:30.281 [2024-11-26 06:24:14.234564] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:30.281 [2024-11-26 06:24:14.234815] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:30.281 [2024-11-26 06:24:14.234860] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:30.281 [2024-11-26 06:24:14.235283] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:30.281 [2024-11-26 06:24:14.235550] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:30.281 [2024-11-26 06:24:14.235605] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:30.281 [2024-11-26 06:24:14.235860] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:30.281 pt4 00:14:30.281 06:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.281 06:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:30.281 06:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:14:30.281 06:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:14:30.281 06:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:30.281 06:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:30.281 06:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:30.281 06:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:30.281 06:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:30.281 06:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.281 06:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.281 06:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:30.281 06:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.281 06:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.281 06:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.281 06:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.281 06:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.281 06:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.281 06:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.281 "name": "raid_bdev1", 00:14:30.281 "uuid": "05b05831-72ac-4761-9cd9-0de77b272663", 00:14:30.281 "strip_size_kb": 64, 00:14:30.281 "state": "online", 00:14:30.281 "raid_level": "concat", 00:14:30.281 
"superblock": true, 00:14:30.281 "num_base_bdevs": 4, 00:14:30.281 "num_base_bdevs_discovered": 4, 00:14:30.281 "num_base_bdevs_operational": 4, 00:14:30.281 "base_bdevs_list": [ 00:14:30.281 { 00:14:30.281 "name": "pt1", 00:14:30.281 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:30.281 "is_configured": true, 00:14:30.281 "data_offset": 2048, 00:14:30.281 "data_size": 63488 00:14:30.281 }, 00:14:30.281 { 00:14:30.281 "name": "pt2", 00:14:30.281 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:30.281 "is_configured": true, 00:14:30.281 "data_offset": 2048, 00:14:30.281 "data_size": 63488 00:14:30.281 }, 00:14:30.281 { 00:14:30.281 "name": "pt3", 00:14:30.281 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:30.281 "is_configured": true, 00:14:30.281 "data_offset": 2048, 00:14:30.281 "data_size": 63488 00:14:30.281 }, 00:14:30.281 { 00:14:30.281 "name": "pt4", 00:14:30.281 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:30.281 "is_configured": true, 00:14:30.281 "data_offset": 2048, 00:14:30.281 "data_size": 63488 00:14:30.281 } 00:14:30.281 ] 00:14:30.281 }' 00:14:30.281 06:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.281 06:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.850 06:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:30.850 06:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:30.850 06:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:30.850 06:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:30.850 06:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:30.850 06:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:30.850 06:24:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:30.850 06:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:30.850 06:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.850 06:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.850 [2024-11-26 06:24:14.736819] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:30.850 06:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.850 06:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:30.850 "name": "raid_bdev1", 00:14:30.850 "aliases": [ 00:14:30.850 "05b05831-72ac-4761-9cd9-0de77b272663" 00:14:30.850 ], 00:14:30.850 "product_name": "Raid Volume", 00:14:30.850 "block_size": 512, 00:14:30.850 "num_blocks": 253952, 00:14:30.850 "uuid": "05b05831-72ac-4761-9cd9-0de77b272663", 00:14:30.850 "assigned_rate_limits": { 00:14:30.850 "rw_ios_per_sec": 0, 00:14:30.850 "rw_mbytes_per_sec": 0, 00:14:30.850 "r_mbytes_per_sec": 0, 00:14:30.850 "w_mbytes_per_sec": 0 00:14:30.850 }, 00:14:30.850 "claimed": false, 00:14:30.850 "zoned": false, 00:14:30.850 "supported_io_types": { 00:14:30.850 "read": true, 00:14:30.850 "write": true, 00:14:30.850 "unmap": true, 00:14:30.850 "flush": true, 00:14:30.850 "reset": true, 00:14:30.850 "nvme_admin": false, 00:14:30.850 "nvme_io": false, 00:14:30.850 "nvme_io_md": false, 00:14:30.850 "write_zeroes": true, 00:14:30.850 "zcopy": false, 00:14:30.850 "get_zone_info": false, 00:14:30.850 "zone_management": false, 00:14:30.850 "zone_append": false, 00:14:30.850 "compare": false, 00:14:30.850 "compare_and_write": false, 00:14:30.850 "abort": false, 00:14:30.850 "seek_hole": false, 00:14:30.850 "seek_data": false, 00:14:30.850 "copy": false, 00:14:30.850 "nvme_iov_md": false 00:14:30.850 }, 00:14:30.850 
"memory_domains": [ 00:14:30.850 { 00:14:30.850 "dma_device_id": "system", 00:14:30.850 "dma_device_type": 1 00:14:30.850 }, 00:14:30.850 { 00:14:30.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:30.850 "dma_device_type": 2 00:14:30.850 }, 00:14:30.850 { 00:14:30.850 "dma_device_id": "system", 00:14:30.850 "dma_device_type": 1 00:14:30.850 }, 00:14:30.850 { 00:14:30.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:30.850 "dma_device_type": 2 00:14:30.850 }, 00:14:30.850 { 00:14:30.850 "dma_device_id": "system", 00:14:30.850 "dma_device_type": 1 00:14:30.850 }, 00:14:30.850 { 00:14:30.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:30.850 "dma_device_type": 2 00:14:30.850 }, 00:14:30.850 { 00:14:30.850 "dma_device_id": "system", 00:14:30.850 "dma_device_type": 1 00:14:30.850 }, 00:14:30.850 { 00:14:30.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:30.850 "dma_device_type": 2 00:14:30.850 } 00:14:30.850 ], 00:14:30.850 "driver_specific": { 00:14:30.850 "raid": { 00:14:30.850 "uuid": "05b05831-72ac-4761-9cd9-0de77b272663", 00:14:30.850 "strip_size_kb": 64, 00:14:30.850 "state": "online", 00:14:30.850 "raid_level": "concat", 00:14:30.850 "superblock": true, 00:14:30.850 "num_base_bdevs": 4, 00:14:30.850 "num_base_bdevs_discovered": 4, 00:14:30.850 "num_base_bdevs_operational": 4, 00:14:30.850 "base_bdevs_list": [ 00:14:30.850 { 00:14:30.850 "name": "pt1", 00:14:30.850 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:30.850 "is_configured": true, 00:14:30.850 "data_offset": 2048, 00:14:30.850 "data_size": 63488 00:14:30.850 }, 00:14:30.850 { 00:14:30.850 "name": "pt2", 00:14:30.850 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:30.850 "is_configured": true, 00:14:30.850 "data_offset": 2048, 00:14:30.850 "data_size": 63488 00:14:30.850 }, 00:14:30.850 { 00:14:30.850 "name": "pt3", 00:14:30.850 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:30.850 "is_configured": true, 00:14:30.850 "data_offset": 2048, 00:14:30.850 "data_size": 63488 
00:14:30.850 }, 00:14:30.850 { 00:14:30.850 "name": "pt4", 00:14:30.850 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:30.850 "is_configured": true, 00:14:30.850 "data_offset": 2048, 00:14:30.850 "data_size": 63488 00:14:30.850 } 00:14:30.850 ] 00:14:30.851 } 00:14:30.851 } 00:14:30.851 }' 00:14:30.851 06:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:30.851 06:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:30.851 pt2 00:14:30.851 pt3 00:14:30.851 pt4' 00:14:30.851 06:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:30.851 06:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:30.851 06:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:30.851 06:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:30.851 06:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.851 06:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.851 06:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:30.851 06:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.851 06:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:30.851 06:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:30.851 06:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:30.851 06:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:14:30.851 06:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:30.851 06:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.851 06:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.851 06:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.851 06:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:30.851 06:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:30.851 06:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:30.851 06:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:30.851 06:24:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:30.851 06:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.851 06:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.110 06:24:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.110 06:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:31.110 06:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:31.110 06:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:31.110 06:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:14:31.110 06:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:14:31.110 06:24:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.110 06:24:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.111 06:24:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.111 06:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:31.111 06:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:31.111 06:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:31.111 06:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:31.111 06:24:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.111 06:24:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.111 [2024-11-26 06:24:15.080337] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:31.111 06:24:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.111 06:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 05b05831-72ac-4761-9cd9-0de77b272663 '!=' 05b05831-72ac-4761-9cd9-0de77b272663 ']' 00:14:31.111 06:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:14:31.111 06:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:31.111 06:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:31.111 06:24:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 73126 00:14:31.111 06:24:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 73126 ']' 00:14:31.111 06:24:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 73126 00:14:31.111 06:24:15 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@959 -- # uname 00:14:31.111 06:24:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:31.111 06:24:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73126 00:14:31.111 06:24:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:31.111 06:24:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:31.111 06:24:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73126' 00:14:31.111 killing process with pid 73126 00:14:31.111 06:24:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 73126 00:14:31.111 [2024-11-26 06:24:15.152508] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:31.111 06:24:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 73126 00:14:31.111 [2024-11-26 06:24:15.152713] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:31.111 [2024-11-26 06:24:15.152828] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:31.111 [2024-11-26 06:24:15.152901] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:31.678 [2024-11-26 06:24:15.654514] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:33.057 06:24:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:33.057 00:14:33.057 real 0m6.453s 00:14:33.057 user 0m9.012s 00:14:33.057 sys 0m1.224s 00:14:33.057 06:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:33.057 06:24:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.057 ************************************ 00:14:33.057 END TEST raid_superblock_test 
00:14:33.057 ************************************ 00:14:33.057 06:24:17 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:14:33.057 06:24:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:33.057 06:24:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:33.057 06:24:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:33.057 ************************************ 00:14:33.057 START TEST raid_read_error_test 00:14:33.057 ************************************ 00:14:33.057 06:24:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:14:33.057 06:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:14:33.057 06:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:14:33.057 06:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:14:33.057 06:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:33.057 06:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:33.057 06:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:33.057 06:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:33.057 06:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:33.057 06:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:33.057 06:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:33.057 06:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:33.057 06:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:14:33.057 06:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
(( i++ )) 00:14:33.057 06:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:33.057 06:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:14:33.057 06:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:33.057 06:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:33.057 06:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:33.057 06:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:33.057 06:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:33.057 06:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:33.057 06:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:33.057 06:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:33.057 06:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:33.057 06:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:14:33.057 06:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:14:33.057 06:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:14:33.057 06:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:33.057 06:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.uvWZq4otqU 00:14:33.057 06:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73396 00:14:33.057 06:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z 
-f -L bdev_raid 00:14:33.057 06:24:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73396 00:14:33.057 06:24:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 73396 ']' 00:14:33.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:33.057 06:24:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:33.057 06:24:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:33.058 06:24:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:33.058 06:24:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:33.058 06:24:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.316 [2024-11-26 06:24:17.251360] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:14:33.316 [2024-11-26 06:24:17.251515] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73396 ] 00:14:33.316 [2024-11-26 06:24:17.434823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:33.574 [2024-11-26 06:24:17.590390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:33.833 [2024-11-26 06:24:17.864136] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:33.833 [2024-11-26 06:24:17.864200] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:34.092 06:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:34.092 06:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:14:34.092 06:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:34.092 06:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:34.092 06:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.092 06:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.092 BaseBdev1_malloc 00:14:34.092 06:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.092 06:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:14:34.092 06:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.092 06:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.092 true 00:14:34.092 06:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:34.092 06:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:34.092 06:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.092 06:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.352 [2024-11-26 06:24:18.229635] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:34.352 [2024-11-26 06:24:18.229762] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:34.352 [2024-11-26 06:24:18.229834] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:34.352 [2024-11-26 06:24:18.229881] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:34.352 [2024-11-26 06:24:18.233155] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:34.352 [2024-11-26 06:24:18.233279] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:34.352 BaseBdev1 00:14:34.352 06:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.352 06:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:34.352 06:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:34.352 06:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.352 06:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.352 BaseBdev2_malloc 00:14:34.352 06:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.352 06:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:34.352 06:24:18 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.352 06:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.352 true 00:14:34.352 06:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.352 06:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:34.352 06:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.352 06:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.352 [2024-11-26 06:24:18.310554] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:34.352 [2024-11-26 06:24:18.310720] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:34.352 [2024-11-26 06:24:18.310770] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:34.352 [2024-11-26 06:24:18.310812] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:34.352 [2024-11-26 06:24:18.313719] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:34.352 [2024-11-26 06:24:18.313809] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:34.352 BaseBdev2 00:14:34.352 06:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.352 06:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:34.352 06:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:34.352 06:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.352 06:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.352 BaseBdev3_malloc 00:14:34.352 06:24:18 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.352 06:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:14:34.352 06:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.352 06:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.352 true 00:14:34.352 06:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.352 06:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:34.352 06:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.352 06:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.352 [2024-11-26 06:24:18.402738] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:34.352 [2024-11-26 06:24:18.402867] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:34.352 [2024-11-26 06:24:18.402931] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:34.352 [2024-11-26 06:24:18.402981] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:34.352 [2024-11-26 06:24:18.405921] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:34.352 [2024-11-26 06:24:18.406017] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:34.352 BaseBdev3 00:14:34.352 06:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.352 06:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:34.352 06:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:14:34.352 06:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.352 06:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.352 BaseBdev4_malloc 00:14:34.352 06:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.352 06:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:14:34.352 06:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.352 06:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.352 true 00:14:34.352 06:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.352 06:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:14:34.353 06:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.353 06:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.353 [2024-11-26 06:24:18.482312] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:14:34.353 [2024-11-26 06:24:18.482440] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:34.353 [2024-11-26 06:24:18.482482] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:34.353 [2024-11-26 06:24:18.482520] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:34.612 [2024-11-26 06:24:18.485221] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:34.612 [2024-11-26 06:24:18.485327] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:34.612 BaseBdev4 00:14:34.612 06:24:18 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.612 06:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:14:34.612 06:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.612 06:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.612 [2024-11-26 06:24:18.494371] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:34.612 [2024-11-26 06:24:18.496771] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:34.612 [2024-11-26 06:24:18.496907] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:34.612 [2024-11-26 06:24:18.497035] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:34.612 [2024-11-26 06:24:18.497374] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:14:34.612 [2024-11-26 06:24:18.497406] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:34.612 [2024-11-26 06:24:18.497665] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:14:34.612 [2024-11-26 06:24:18.497843] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:14:34.612 [2024-11-26 06:24:18.497855] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:14:34.612 [2024-11-26 06:24:18.498018] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:34.612 06:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.612 06:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:14:34.613 06:24:18 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:34.613 06:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:34.613 06:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:34.613 06:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:34.613 06:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:34.613 06:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:34.613 06:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:34.613 06:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:34.613 06:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:34.613 06:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.613 06:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.613 06:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.613 06:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.613 06:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.613 06:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.613 "name": "raid_bdev1", 00:14:34.613 "uuid": "c15e4198-fe27-49c4-89fd-a99e79016ade", 00:14:34.613 "strip_size_kb": 64, 00:14:34.613 "state": "online", 00:14:34.613 "raid_level": "concat", 00:14:34.613 "superblock": true, 00:14:34.613 "num_base_bdevs": 4, 00:14:34.613 "num_base_bdevs_discovered": 4, 00:14:34.613 "num_base_bdevs_operational": 4, 00:14:34.613 "base_bdevs_list": [ 
00:14:34.613 { 00:14:34.613 "name": "BaseBdev1", 00:14:34.613 "uuid": "8032dbea-3a02-50fb-a5f8-592899cc21cd", 00:14:34.613 "is_configured": true, 00:14:34.613 "data_offset": 2048, 00:14:34.613 "data_size": 63488 00:14:34.613 }, 00:14:34.613 { 00:14:34.613 "name": "BaseBdev2", 00:14:34.613 "uuid": "8894abee-a399-51b2-8dd5-81e353942b60", 00:14:34.613 "is_configured": true, 00:14:34.613 "data_offset": 2048, 00:14:34.613 "data_size": 63488 00:14:34.613 }, 00:14:34.613 { 00:14:34.613 "name": "BaseBdev3", 00:14:34.613 "uuid": "8831e288-a023-5d40-97ea-fcd02bd5e3bd", 00:14:34.613 "is_configured": true, 00:14:34.613 "data_offset": 2048, 00:14:34.613 "data_size": 63488 00:14:34.613 }, 00:14:34.613 { 00:14:34.613 "name": "BaseBdev4", 00:14:34.613 "uuid": "e91d8653-bd2b-5da9-8191-96199e0fa56f", 00:14:34.613 "is_configured": true, 00:14:34.613 "data_offset": 2048, 00:14:34.613 "data_size": 63488 00:14:34.613 } 00:14:34.613 ] 00:14:34.613 }' 00:14:34.613 06:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:34.613 06:24:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.872 06:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:34.872 06:24:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:35.211 [2024-11-26 06:24:19.094830] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:14:36.150 06:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:14:36.150 06:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.150 06:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.150 06:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.150 06:24:19 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:36.150 06:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:14:36.150 06:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:14:36.150 06:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:14:36.150 06:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:36.150 06:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:36.150 06:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:36.150 06:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:36.150 06:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:36.150 06:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.150 06:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.150 06:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.150 06:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.150 06:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.150 06:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.150 06:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.150 06:24:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.150 06:24:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.150 06:24:20 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.150 "name": "raid_bdev1", 00:14:36.150 "uuid": "c15e4198-fe27-49c4-89fd-a99e79016ade", 00:14:36.150 "strip_size_kb": 64, 00:14:36.150 "state": "online", 00:14:36.150 "raid_level": "concat", 00:14:36.150 "superblock": true, 00:14:36.150 "num_base_bdevs": 4, 00:14:36.150 "num_base_bdevs_discovered": 4, 00:14:36.150 "num_base_bdevs_operational": 4, 00:14:36.150 "base_bdevs_list": [ 00:14:36.150 { 00:14:36.150 "name": "BaseBdev1", 00:14:36.150 "uuid": "8032dbea-3a02-50fb-a5f8-592899cc21cd", 00:14:36.150 "is_configured": true, 00:14:36.150 "data_offset": 2048, 00:14:36.150 "data_size": 63488 00:14:36.150 }, 00:14:36.150 { 00:14:36.150 "name": "BaseBdev2", 00:14:36.150 "uuid": "8894abee-a399-51b2-8dd5-81e353942b60", 00:14:36.150 "is_configured": true, 00:14:36.150 "data_offset": 2048, 00:14:36.150 "data_size": 63488 00:14:36.150 }, 00:14:36.150 { 00:14:36.150 "name": "BaseBdev3", 00:14:36.150 "uuid": "8831e288-a023-5d40-97ea-fcd02bd5e3bd", 00:14:36.150 "is_configured": true, 00:14:36.150 "data_offset": 2048, 00:14:36.150 "data_size": 63488 00:14:36.150 }, 00:14:36.150 { 00:14:36.150 "name": "BaseBdev4", 00:14:36.150 "uuid": "e91d8653-bd2b-5da9-8191-96199e0fa56f", 00:14:36.150 "is_configured": true, 00:14:36.150 "data_offset": 2048, 00:14:36.150 "data_size": 63488 00:14:36.150 } 00:14:36.150 ] 00:14:36.150 }' 00:14:36.150 06:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.150 06:24:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.409 06:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:36.409 06:24:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.409 06:24:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.409 [2024-11-26 06:24:20.449218] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:36.409 [2024-11-26 06:24:20.449310] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:36.409 [2024-11-26 06:24:20.452551] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:36.409 [2024-11-26 06:24:20.452687] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:36.409 { 00:14:36.409 "results": [ 00:14:36.409 { 00:14:36.409 "job": "raid_bdev1", 00:14:36.409 "core_mask": "0x1", 00:14:36.409 "workload": "randrw", 00:14:36.409 "percentage": 50, 00:14:36.409 "status": "finished", 00:14:36.409 "queue_depth": 1, 00:14:36.409 "io_size": 131072, 00:14:36.409 "runtime": 1.354587, 00:14:36.409 "iops": 11754.874363920517, 00:14:36.409 "mibps": 1469.3592954900646, 00:14:36.409 "io_failed": 1, 00:14:36.409 "io_timeout": 0, 00:14:36.409 "avg_latency_us": 119.57601050404266, 00:14:36.409 "min_latency_us": 27.83580786026201, 00:14:36.409 "max_latency_us": 1681.3275109170306 00:14:36.409 } 00:14:36.409 ], 00:14:36.409 "core_count": 1 00:14:36.409 } 00:14:36.409 [2024-11-26 06:24:20.452789] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:36.409 [2024-11-26 06:24:20.452814] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:14:36.409 06:24:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.409 06:24:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73396 00:14:36.409 06:24:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 73396 ']' 00:14:36.409 06:24:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 73396 00:14:36.409 06:24:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:14:36.409 06:24:20 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:36.409 06:24:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73396 00:14:36.409 06:24:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:36.409 killing process with pid 73396 00:14:36.409 06:24:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:36.409 06:24:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73396' 00:14:36.409 06:24:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 73396 00:14:36.409 [2024-11-26 06:24:20.501795] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:36.409 06:24:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 73396 00:14:36.977 [2024-11-26 06:24:20.897908] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:38.358 06:24:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.uvWZq4otqU 00:14:38.358 06:24:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:38.358 06:24:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:38.358 06:24:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:14:38.358 06:24:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:14:38.358 ************************************ 00:14:38.358 END TEST raid_read_error_test 00:14:38.358 ************************************ 00:14:38.358 06:24:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:38.358 06:24:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:38.358 06:24:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:14:38.358 00:14:38.358 real 0m5.053s 
00:14:38.358 user 0m5.858s 00:14:38.358 sys 0m0.772s 00:14:38.359 06:24:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:38.359 06:24:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.359 06:24:22 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:14:38.359 06:24:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:38.359 06:24:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:38.359 06:24:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:38.359 ************************************ 00:14:38.359 START TEST raid_write_error_test 00:14:38.359 ************************************ 00:14:38.359 06:24:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:14:38.359 06:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:14:38.359 06:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:14:38.359 06:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:14:38.359 06:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:38.359 06:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:38.359 06:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:38.359 06:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:38.359 06:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:38.359 06:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:38.359 06:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:38.359 06:24:22 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:38.359 06:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:14:38.359 06:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:38.359 06:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:38.359 06:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:14:38.359 06:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:38.359 06:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:38.359 06:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:38.359 06:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:38.359 06:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:38.359 06:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:38.359 06:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:38.359 06:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:38.359 06:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:38.359 06:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:14:38.359 06:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:14:38.359 06:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:14:38.359 06:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:38.359 06:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.k1pxZ9OG2X 00:14:38.359 06:24:22 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:38.359 06:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73542 00:14:38.359 06:24:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73542 00:14:38.359 06:24:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 73542 ']' 00:14:38.359 06:24:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:38.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:38.359 06:24:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:38.359 06:24:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:38.359 06:24:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:38.359 06:24:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.359 [2024-11-26 06:24:22.382895] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:14:38.359 [2024-11-26 06:24:22.383112] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73542 ] 00:14:38.617 [2024-11-26 06:24:22.569638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:38.617 [2024-11-26 06:24:22.692751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:38.876 [2024-11-26 06:24:22.902786] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:38.876 [2024-11-26 06:24:22.902852] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:39.139 06:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:39.139 06:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:14:39.139 06:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:39.139 06:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:39.139 06:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.139 06:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.408 BaseBdev1_malloc 00:14:39.408 06:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.408 06:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:14:39.408 06:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.408 06:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.408 true 00:14:39.408 06:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:14:39.408 06:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:39.408 06:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.408 06:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.408 [2024-11-26 06:24:23.293214] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:39.408 [2024-11-26 06:24:23.293298] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:39.408 [2024-11-26 06:24:23.293325] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:39.408 [2024-11-26 06:24:23.293339] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:39.408 [2024-11-26 06:24:23.295770] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:39.408 [2024-11-26 06:24:23.295822] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:39.408 BaseBdev1 00:14:39.408 06:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.408 06:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:39.408 06:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:39.408 06:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.408 06:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.408 BaseBdev2_malloc 00:14:39.408 06:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.408 06:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:39.408 06:24:23 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.408 06:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.408 true 00:14:39.408 06:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.408 06:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:39.408 06:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.408 06:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.408 [2024-11-26 06:24:23.354524] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:39.408 [2024-11-26 06:24:23.354688] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:39.408 [2024-11-26 06:24:23.354729] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:39.408 [2024-11-26 06:24:23.354741] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:39.408 [2024-11-26 06:24:23.357124] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:39.408 [2024-11-26 06:24:23.357174] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:39.408 BaseBdev2 00:14:39.408 06:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.408 06:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:39.408 06:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:39.408 06:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.408 06:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:14:39.408 BaseBdev3_malloc 00:14:39.408 06:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.408 06:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:14:39.408 06:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.408 06:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.408 true 00:14:39.408 06:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.408 06:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:39.408 06:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.408 06:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.408 [2024-11-26 06:24:23.425470] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:39.408 [2024-11-26 06:24:23.425624] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:39.408 [2024-11-26 06:24:23.425661] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:39.408 [2024-11-26 06:24:23.425674] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:39.408 [2024-11-26 06:24:23.427824] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:39.408 [2024-11-26 06:24:23.427869] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:39.408 BaseBdev3 00:14:39.408 06:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.408 06:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:39.408 06:24:23 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:39.408 06:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.408 06:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.408 BaseBdev4_malloc 00:14:39.408 06:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.408 06:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:14:39.408 06:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.408 06:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.408 true 00:14:39.408 06:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.408 06:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:14:39.408 06:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.408 06:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.408 [2024-11-26 06:24:23.482092] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:14:39.408 [2024-11-26 06:24:23.482167] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:39.408 [2024-11-26 06:24:23.482186] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:39.408 [2024-11-26 06:24:23.482198] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:39.408 [2024-11-26 06:24:23.484330] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:39.408 [2024-11-26 06:24:23.484375] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:39.408 BaseBdev4 
00:14:39.408 06:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.408 06:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:14:39.409 06:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.409 06:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.409 [2024-11-26 06:24:23.490175] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:39.409 [2024-11-26 06:24:23.492135] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:39.409 [2024-11-26 06:24:23.492267] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:39.409 [2024-11-26 06:24:23.492385] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:39.409 [2024-11-26 06:24:23.492716] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:14:39.409 [2024-11-26 06:24:23.492773] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:39.409 [2024-11-26 06:24:23.493197] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:14:39.409 [2024-11-26 06:24:23.493478] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:14:39.409 [2024-11-26 06:24:23.493500] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:14:39.409 [2024-11-26 06:24:23.493688] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:39.409 06:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.409 06:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:14:39.409 06:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:39.409 06:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:39.409 06:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:39.409 06:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:39.409 06:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:39.409 06:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:39.409 06:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:39.409 06:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:39.409 06:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.409 06:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.409 06:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.409 06:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.409 06:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.409 06:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.668 06:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:39.668 "name": "raid_bdev1", 00:14:39.668 "uuid": "92735871-f69e-4eec-aad9-c68670aad6f4", 00:14:39.668 "strip_size_kb": 64, 00:14:39.668 "state": "online", 00:14:39.668 "raid_level": "concat", 00:14:39.668 "superblock": true, 00:14:39.668 "num_base_bdevs": 4, 00:14:39.668 "num_base_bdevs_discovered": 4, 00:14:39.668 
"num_base_bdevs_operational": 4, 00:14:39.668 "base_bdevs_list": [ 00:14:39.668 { 00:14:39.668 "name": "BaseBdev1", 00:14:39.668 "uuid": "98880b3d-60c3-59e3-beea-7c0eaf027b40", 00:14:39.668 "is_configured": true, 00:14:39.668 "data_offset": 2048, 00:14:39.668 "data_size": 63488 00:14:39.668 }, 00:14:39.668 { 00:14:39.668 "name": "BaseBdev2", 00:14:39.668 "uuid": "0b9b84cb-c3e1-55f6-9af3-be87a37ab591", 00:14:39.668 "is_configured": true, 00:14:39.668 "data_offset": 2048, 00:14:39.668 "data_size": 63488 00:14:39.668 }, 00:14:39.668 { 00:14:39.668 "name": "BaseBdev3", 00:14:39.668 "uuid": "f143e15a-89c0-52a3-b329-e99af73da49b", 00:14:39.668 "is_configured": true, 00:14:39.668 "data_offset": 2048, 00:14:39.668 "data_size": 63488 00:14:39.668 }, 00:14:39.668 { 00:14:39.668 "name": "BaseBdev4", 00:14:39.668 "uuid": "83f04a3e-2106-510a-9b14-5abfb023b2c8", 00:14:39.668 "is_configured": true, 00:14:39.668 "data_offset": 2048, 00:14:39.668 "data_size": 63488 00:14:39.668 } 00:14:39.668 ] 00:14:39.668 }' 00:14:39.668 06:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:39.668 06:24:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.927 06:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:39.927 06:24:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:40.185 [2024-11-26 06:24:24.086701] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:14:41.120 06:24:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:14:41.120 06:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.121 06:24:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.121 06:24:25 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.121 06:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:41.121 06:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:14:41.121 06:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:14:41.121 06:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:14:41.121 06:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:41.121 06:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:41.121 06:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:41.121 06:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:41.121 06:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:41.121 06:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.121 06:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.121 06:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.121 06:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.121 06:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.121 06:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.121 06:24:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.121 06:24:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.121 06:24:25 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.121 06:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.121 "name": "raid_bdev1", 00:14:41.121 "uuid": "92735871-f69e-4eec-aad9-c68670aad6f4", 00:14:41.121 "strip_size_kb": 64, 00:14:41.121 "state": "online", 00:14:41.121 "raid_level": "concat", 00:14:41.121 "superblock": true, 00:14:41.121 "num_base_bdevs": 4, 00:14:41.121 "num_base_bdevs_discovered": 4, 00:14:41.121 "num_base_bdevs_operational": 4, 00:14:41.121 "base_bdevs_list": [ 00:14:41.121 { 00:14:41.121 "name": "BaseBdev1", 00:14:41.121 "uuid": "98880b3d-60c3-59e3-beea-7c0eaf027b40", 00:14:41.121 "is_configured": true, 00:14:41.121 "data_offset": 2048, 00:14:41.121 "data_size": 63488 00:14:41.121 }, 00:14:41.121 { 00:14:41.121 "name": "BaseBdev2", 00:14:41.121 "uuid": "0b9b84cb-c3e1-55f6-9af3-be87a37ab591", 00:14:41.121 "is_configured": true, 00:14:41.121 "data_offset": 2048, 00:14:41.121 "data_size": 63488 00:14:41.121 }, 00:14:41.121 { 00:14:41.121 "name": "BaseBdev3", 00:14:41.121 "uuid": "f143e15a-89c0-52a3-b329-e99af73da49b", 00:14:41.121 "is_configured": true, 00:14:41.121 "data_offset": 2048, 00:14:41.121 "data_size": 63488 00:14:41.121 }, 00:14:41.121 { 00:14:41.121 "name": "BaseBdev4", 00:14:41.121 "uuid": "83f04a3e-2106-510a-9b14-5abfb023b2c8", 00:14:41.121 "is_configured": true, 00:14:41.121 "data_offset": 2048, 00:14:41.121 "data_size": 63488 00:14:41.121 } 00:14:41.121 ] 00:14:41.121 }' 00:14:41.121 06:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.121 06:24:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.378 06:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:41.378 06:24:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.378 06:24:25 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:41.378 [2024-11-26 06:24:25.507509] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:41.378 [2024-11-26 06:24:25.507650] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:41.378 [2024-11-26 06:24:25.510638] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:41.378 [2024-11-26 06:24:25.510741] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:41.378 [2024-11-26 06:24:25.510827] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:41.378 [2024-11-26 06:24:25.510903] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:14:41.637 { 00:14:41.637 "results": [ 00:14:41.637 { 00:14:41.637 "job": "raid_bdev1", 00:14:41.637 "core_mask": "0x1", 00:14:41.637 "workload": "randrw", 00:14:41.637 "percentage": 50, 00:14:41.637 "status": "finished", 00:14:41.637 "queue_depth": 1, 00:14:41.637 "io_size": 131072, 00:14:41.637 "runtime": 1.421581, 00:14:41.637 "iops": 13976.692147686274, 00:14:41.637 "mibps": 1747.0865184607842, 00:14:41.637 "io_failed": 1, 00:14:41.637 "io_timeout": 0, 00:14:41.637 "avg_latency_us": 99.45387024392174, 00:14:41.637 "min_latency_us": 27.388646288209607, 00:14:41.637 "max_latency_us": 1538.235807860262 00:14:41.637 } 00:14:41.637 ], 00:14:41.637 "core_count": 1 00:14:41.637 } 00:14:41.637 06:24:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.637 06:24:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73542 00:14:41.637 06:24:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 73542 ']' 00:14:41.637 06:24:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 73542 00:14:41.637 06:24:25 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:14:41.637 06:24:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:41.637 06:24:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73542 00:14:41.637 killing process with pid 73542 00:14:41.637 06:24:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:41.637 06:24:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:41.637 06:24:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73542' 00:14:41.637 06:24:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 73542 00:14:41.637 [2024-11-26 06:24:25.550105] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:41.637 06:24:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 73542 00:14:41.897 [2024-11-26 06:24:25.905162] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:43.274 06:24:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.k1pxZ9OG2X 00:14:43.274 06:24:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:43.274 06:24:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:43.274 06:24:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:14:43.274 06:24:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:14:43.274 06:24:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:43.274 06:24:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:43.274 06:24:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:14:43.274 00:14:43.274 real 0m4.955s 00:14:43.274 user 0m5.854s 
00:14:43.274 sys 0m0.646s 00:14:43.274 06:24:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:43.274 06:24:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.274 ************************************ 00:14:43.274 END TEST raid_write_error_test 00:14:43.274 ************************************ 00:14:43.274 06:24:27 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:14:43.274 06:24:27 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:14:43.274 06:24:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:43.274 06:24:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:43.274 06:24:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:43.274 ************************************ 00:14:43.274 START TEST raid_state_function_test 00:14:43.274 ************************************ 00:14:43.274 06:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:14:43.274 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:14:43.274 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:43.274 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:43.274 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:43.274 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:43.274 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:43.274 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:43.274 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:43.274 
06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:43.274 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:43.274 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:43.274 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:43.274 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:43.274 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:43.274 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:43.274 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:43.274 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:43.274 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:43.274 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:43.274 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:43.274 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:43.274 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:43.274 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:43.274 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:43.274 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:14:43.274 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:14:43.274 06:24:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:43.274 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:43.274 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:43.274 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73691 00:14:43.274 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73691' 00:14:43.274 Process raid pid: 73691 00:14:43.275 06:24:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73691 00:14:43.275 06:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 73691 ']' 00:14:43.275 06:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:43.275 06:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:43.275 06:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:43.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:43.275 06:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:43.275 06:24:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.275 [2024-11-26 06:24:27.389188] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:14:43.275 [2024-11-26 06:24:27.389449] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:43.535 [2024-11-26 06:24:27.576192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.795 [2024-11-26 06:24:27.704202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:43.795 [2024-11-26 06:24:27.912883] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:43.795 [2024-11-26 06:24:27.913024] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:44.365 06:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:44.365 06:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:14:44.365 06:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:44.365 06:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.365 06:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.365 [2024-11-26 06:24:28.271293] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:44.365 [2024-11-26 06:24:28.271373] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:44.365 [2024-11-26 06:24:28.271391] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:44.365 [2024-11-26 06:24:28.271407] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:44.365 [2024-11-26 06:24:28.271417] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:14:44.365 [2024-11-26 06:24:28.271431] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:44.365 [2024-11-26 06:24:28.271440] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:44.365 [2024-11-26 06:24:28.271453] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:44.365 06:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.365 06:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:44.365 06:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:44.365 06:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:44.365 06:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:44.365 06:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:44.365 06:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:44.365 06:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.365 06:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.365 06:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.365 06:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.365 06:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.365 06:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:44.365 06:24:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.365 06:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.365 06:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.365 06:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.365 "name": "Existed_Raid", 00:14:44.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.365 "strip_size_kb": 0, 00:14:44.365 "state": "configuring", 00:14:44.365 "raid_level": "raid1", 00:14:44.365 "superblock": false, 00:14:44.365 "num_base_bdevs": 4, 00:14:44.365 "num_base_bdevs_discovered": 0, 00:14:44.365 "num_base_bdevs_operational": 4, 00:14:44.365 "base_bdevs_list": [ 00:14:44.365 { 00:14:44.365 "name": "BaseBdev1", 00:14:44.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.365 "is_configured": false, 00:14:44.365 "data_offset": 0, 00:14:44.365 "data_size": 0 00:14:44.365 }, 00:14:44.365 { 00:14:44.365 "name": "BaseBdev2", 00:14:44.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.365 "is_configured": false, 00:14:44.365 "data_offset": 0, 00:14:44.365 "data_size": 0 00:14:44.365 }, 00:14:44.365 { 00:14:44.365 "name": "BaseBdev3", 00:14:44.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.365 "is_configured": false, 00:14:44.365 "data_offset": 0, 00:14:44.365 "data_size": 0 00:14:44.365 }, 00:14:44.365 { 00:14:44.365 "name": "BaseBdev4", 00:14:44.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.365 "is_configured": false, 00:14:44.365 "data_offset": 0, 00:14:44.365 "data_size": 0 00:14:44.365 } 00:14:44.365 ] 00:14:44.365 }' 00:14:44.365 06:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.365 06:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.623 06:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:14:44.623 06:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.623 06:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.623 [2024-11-26 06:24:28.750461] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:44.623 [2024-11-26 06:24:28.750605] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:44.623 06:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.624 06:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:44.883 06:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.883 06:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.883 [2024-11-26 06:24:28.762424] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:44.883 [2024-11-26 06:24:28.762541] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:44.884 [2024-11-26 06:24:28.762569] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:44.884 [2024-11-26 06:24:28.762592] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:44.884 [2024-11-26 06:24:28.762611] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:44.884 [2024-11-26 06:24:28.762633] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:44.884 [2024-11-26 06:24:28.762651] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:44.884 [2024-11-26 06:24:28.762698] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:44.884 06:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.884 06:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:44.884 06:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.884 06:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.884 [2024-11-26 06:24:28.815843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:44.884 BaseBdev1 00:14:44.884 06:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.884 06:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:44.884 06:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:44.884 06:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:44.884 06:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:44.884 06:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:44.884 06:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:44.884 06:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:44.884 06:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.884 06:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.884 06:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.884 06:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:44.884 06:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.884 06:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.884 [ 00:14:44.884 { 00:14:44.884 "name": "BaseBdev1", 00:14:44.884 "aliases": [ 00:14:44.884 "db846d3a-9f98-44c6-a62a-ce8572fe69d1" 00:14:44.884 ], 00:14:44.884 "product_name": "Malloc disk", 00:14:44.884 "block_size": 512, 00:14:44.884 "num_blocks": 65536, 00:14:44.884 "uuid": "db846d3a-9f98-44c6-a62a-ce8572fe69d1", 00:14:44.884 "assigned_rate_limits": { 00:14:44.884 "rw_ios_per_sec": 0, 00:14:44.884 "rw_mbytes_per_sec": 0, 00:14:44.884 "r_mbytes_per_sec": 0, 00:14:44.884 "w_mbytes_per_sec": 0 00:14:44.884 }, 00:14:44.884 "claimed": true, 00:14:44.884 "claim_type": "exclusive_write", 00:14:44.884 "zoned": false, 00:14:44.884 "supported_io_types": { 00:14:44.884 "read": true, 00:14:44.884 "write": true, 00:14:44.884 "unmap": true, 00:14:44.884 "flush": true, 00:14:44.884 "reset": true, 00:14:44.884 "nvme_admin": false, 00:14:44.884 "nvme_io": false, 00:14:44.884 "nvme_io_md": false, 00:14:44.884 "write_zeroes": true, 00:14:44.884 "zcopy": true, 00:14:44.884 "get_zone_info": false, 00:14:44.884 "zone_management": false, 00:14:44.884 "zone_append": false, 00:14:44.884 "compare": false, 00:14:44.884 "compare_and_write": false, 00:14:44.884 "abort": true, 00:14:44.884 "seek_hole": false, 00:14:44.884 "seek_data": false, 00:14:44.884 "copy": true, 00:14:44.884 "nvme_iov_md": false 00:14:44.884 }, 00:14:44.884 "memory_domains": [ 00:14:44.884 { 00:14:44.884 "dma_device_id": "system", 00:14:44.884 "dma_device_type": 1 00:14:44.884 }, 00:14:44.884 { 00:14:44.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:44.884 "dma_device_type": 2 00:14:44.884 } 00:14:44.884 ], 00:14:44.884 "driver_specific": {} 00:14:44.884 } 00:14:44.884 ] 00:14:44.884 06:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:44.884 06:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:44.884 06:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:44.884 06:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:44.884 06:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:44.884 06:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:44.884 06:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:44.884 06:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:44.884 06:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.884 06:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.884 06:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.884 06:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.884 06:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.884 06:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.884 06:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.884 06:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:44.884 06:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.884 06:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.884 "name": "Existed_Raid", 
00:14:44.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.884 "strip_size_kb": 0, 00:14:44.884 "state": "configuring", 00:14:44.884 "raid_level": "raid1", 00:14:44.884 "superblock": false, 00:14:44.884 "num_base_bdevs": 4, 00:14:44.884 "num_base_bdevs_discovered": 1, 00:14:44.884 "num_base_bdevs_operational": 4, 00:14:44.884 "base_bdevs_list": [ 00:14:44.884 { 00:14:44.884 "name": "BaseBdev1", 00:14:44.884 "uuid": "db846d3a-9f98-44c6-a62a-ce8572fe69d1", 00:14:44.884 "is_configured": true, 00:14:44.884 "data_offset": 0, 00:14:44.884 "data_size": 65536 00:14:44.884 }, 00:14:44.884 { 00:14:44.884 "name": "BaseBdev2", 00:14:44.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.884 "is_configured": false, 00:14:44.884 "data_offset": 0, 00:14:44.884 "data_size": 0 00:14:44.884 }, 00:14:44.884 { 00:14:44.884 "name": "BaseBdev3", 00:14:44.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.884 "is_configured": false, 00:14:44.884 "data_offset": 0, 00:14:44.884 "data_size": 0 00:14:44.884 }, 00:14:44.884 { 00:14:44.884 "name": "BaseBdev4", 00:14:44.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.884 "is_configured": false, 00:14:44.884 "data_offset": 0, 00:14:44.884 "data_size": 0 00:14:44.884 } 00:14:44.884 ] 00:14:44.884 }' 00:14:44.884 06:24:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.884 06:24:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.453 06:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:45.453 06:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.453 06:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.454 [2024-11-26 06:24:29.335053] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:45.454 [2024-11-26 06:24:29.335230] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:45.454 06:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.454 06:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:45.454 06:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.454 06:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.454 [2024-11-26 06:24:29.347080] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:45.454 [2024-11-26 06:24:29.349264] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:45.454 [2024-11-26 06:24:29.349350] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:45.454 [2024-11-26 06:24:29.349383] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:45.454 [2024-11-26 06:24:29.349412] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:45.454 [2024-11-26 06:24:29.349433] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:45.454 [2024-11-26 06:24:29.349458] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:45.454 06:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.454 06:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:45.454 06:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:45.454 06:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:45.454 
06:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:45.454 06:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:45.454 06:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:45.454 06:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:45.454 06:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:45.454 06:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.454 06:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.454 06:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.454 06:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.454 06:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.454 06:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:45.454 06:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.454 06:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.454 06:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.454 06:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.454 "name": "Existed_Raid", 00:14:45.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.454 "strip_size_kb": 0, 00:14:45.454 "state": "configuring", 00:14:45.454 "raid_level": "raid1", 00:14:45.454 "superblock": false, 00:14:45.454 "num_base_bdevs": 4, 00:14:45.454 "num_base_bdevs_discovered": 1, 
00:14:45.454 "num_base_bdevs_operational": 4, 00:14:45.454 "base_bdevs_list": [ 00:14:45.454 { 00:14:45.454 "name": "BaseBdev1", 00:14:45.454 "uuid": "db846d3a-9f98-44c6-a62a-ce8572fe69d1", 00:14:45.454 "is_configured": true, 00:14:45.454 "data_offset": 0, 00:14:45.454 "data_size": 65536 00:14:45.454 }, 00:14:45.454 { 00:14:45.454 "name": "BaseBdev2", 00:14:45.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.454 "is_configured": false, 00:14:45.454 "data_offset": 0, 00:14:45.454 "data_size": 0 00:14:45.454 }, 00:14:45.454 { 00:14:45.454 "name": "BaseBdev3", 00:14:45.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.454 "is_configured": false, 00:14:45.454 "data_offset": 0, 00:14:45.454 "data_size": 0 00:14:45.454 }, 00:14:45.454 { 00:14:45.454 "name": "BaseBdev4", 00:14:45.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.454 "is_configured": false, 00:14:45.454 "data_offset": 0, 00:14:45.454 "data_size": 0 00:14:45.454 } 00:14:45.454 ] 00:14:45.454 }' 00:14:45.454 06:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.454 06:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.713 06:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:45.713 06:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.713 06:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.973 [2024-11-26 06:24:29.865476] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:45.973 BaseBdev2 00:14:45.973 06:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.973 06:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:45.973 06:24:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:45.973 06:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:45.973 06:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:45.973 06:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:45.973 06:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:45.973 06:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:45.973 06:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.973 06:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.973 06:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.973 06:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:45.973 06:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.973 06:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.973 [ 00:14:45.973 { 00:14:45.973 "name": "BaseBdev2", 00:14:45.973 "aliases": [ 00:14:45.973 "5e94ccca-5a67-44b3-b69b-2c55c05a0ecf" 00:14:45.973 ], 00:14:45.973 "product_name": "Malloc disk", 00:14:45.973 "block_size": 512, 00:14:45.973 "num_blocks": 65536, 00:14:45.973 "uuid": "5e94ccca-5a67-44b3-b69b-2c55c05a0ecf", 00:14:45.973 "assigned_rate_limits": { 00:14:45.973 "rw_ios_per_sec": 0, 00:14:45.973 "rw_mbytes_per_sec": 0, 00:14:45.973 "r_mbytes_per_sec": 0, 00:14:45.973 "w_mbytes_per_sec": 0 00:14:45.973 }, 00:14:45.973 "claimed": true, 00:14:45.973 "claim_type": "exclusive_write", 00:14:45.973 "zoned": false, 00:14:45.973 "supported_io_types": { 00:14:45.973 "read": true, 
00:14:45.973 "write": true, 00:14:45.973 "unmap": true, 00:14:45.973 "flush": true, 00:14:45.973 "reset": true, 00:14:45.973 "nvme_admin": false, 00:14:45.973 "nvme_io": false, 00:14:45.973 "nvme_io_md": false, 00:14:45.973 "write_zeroes": true, 00:14:45.973 "zcopy": true, 00:14:45.973 "get_zone_info": false, 00:14:45.973 "zone_management": false, 00:14:45.973 "zone_append": false, 00:14:45.973 "compare": false, 00:14:45.973 "compare_and_write": false, 00:14:45.973 "abort": true, 00:14:45.973 "seek_hole": false, 00:14:45.973 "seek_data": false, 00:14:45.973 "copy": true, 00:14:45.973 "nvme_iov_md": false 00:14:45.973 }, 00:14:45.973 "memory_domains": [ 00:14:45.973 { 00:14:45.973 "dma_device_id": "system", 00:14:45.973 "dma_device_type": 1 00:14:45.973 }, 00:14:45.973 { 00:14:45.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:45.973 "dma_device_type": 2 00:14:45.973 } 00:14:45.973 ], 00:14:45.973 "driver_specific": {} 00:14:45.973 } 00:14:45.973 ] 00:14:45.973 06:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.973 06:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:45.973 06:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:45.973 06:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:45.973 06:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:45.973 06:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:45.973 06:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:45.973 06:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:45.973 06:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:14:45.973 06:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:45.973 06:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.973 06:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.973 06:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.973 06:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.973 06:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.973 06:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:45.973 06:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.973 06:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.973 06:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.973 06:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.973 "name": "Existed_Raid", 00:14:45.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.973 "strip_size_kb": 0, 00:14:45.973 "state": "configuring", 00:14:45.973 "raid_level": "raid1", 00:14:45.973 "superblock": false, 00:14:45.973 "num_base_bdevs": 4, 00:14:45.973 "num_base_bdevs_discovered": 2, 00:14:45.973 "num_base_bdevs_operational": 4, 00:14:45.973 "base_bdevs_list": [ 00:14:45.973 { 00:14:45.973 "name": "BaseBdev1", 00:14:45.973 "uuid": "db846d3a-9f98-44c6-a62a-ce8572fe69d1", 00:14:45.973 "is_configured": true, 00:14:45.973 "data_offset": 0, 00:14:45.973 "data_size": 65536 00:14:45.973 }, 00:14:45.973 { 00:14:45.973 "name": "BaseBdev2", 00:14:45.973 "uuid": "5e94ccca-5a67-44b3-b69b-2c55c05a0ecf", 00:14:45.973 "is_configured": true, 
00:14:45.973 "data_offset": 0, 00:14:45.973 "data_size": 65536 00:14:45.973 }, 00:14:45.973 { 00:14:45.973 "name": "BaseBdev3", 00:14:45.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.973 "is_configured": false, 00:14:45.973 "data_offset": 0, 00:14:45.973 "data_size": 0 00:14:45.973 }, 00:14:45.973 { 00:14:45.973 "name": "BaseBdev4", 00:14:45.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.973 "is_configured": false, 00:14:45.973 "data_offset": 0, 00:14:45.973 "data_size": 0 00:14:45.973 } 00:14:45.973 ] 00:14:45.974 }' 00:14:45.974 06:24:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.974 06:24:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.233 06:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:46.233 06:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.233 06:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.494 [2024-11-26 06:24:30.396152] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:46.494 BaseBdev3 00:14:46.494 06:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.494 06:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:46.494 06:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:46.494 06:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:46.494 06:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:46.494 06:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:46.494 06:24:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:46.494 06:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:46.494 06:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.494 06:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.494 06:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.494 06:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:46.494 06:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.494 06:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.494 [ 00:14:46.494 { 00:14:46.494 "name": "BaseBdev3", 00:14:46.494 "aliases": [ 00:14:46.494 "95ef0ff0-b2ae-4f1d-baa4-d022c8e836a7" 00:14:46.494 ], 00:14:46.494 "product_name": "Malloc disk", 00:14:46.494 "block_size": 512, 00:14:46.494 "num_blocks": 65536, 00:14:46.494 "uuid": "95ef0ff0-b2ae-4f1d-baa4-d022c8e836a7", 00:14:46.494 "assigned_rate_limits": { 00:14:46.494 "rw_ios_per_sec": 0, 00:14:46.494 "rw_mbytes_per_sec": 0, 00:14:46.494 "r_mbytes_per_sec": 0, 00:14:46.494 "w_mbytes_per_sec": 0 00:14:46.494 }, 00:14:46.494 "claimed": true, 00:14:46.494 "claim_type": "exclusive_write", 00:14:46.494 "zoned": false, 00:14:46.494 "supported_io_types": { 00:14:46.494 "read": true, 00:14:46.494 "write": true, 00:14:46.494 "unmap": true, 00:14:46.494 "flush": true, 00:14:46.494 "reset": true, 00:14:46.494 "nvme_admin": false, 00:14:46.494 "nvme_io": false, 00:14:46.494 "nvme_io_md": false, 00:14:46.494 "write_zeroes": true, 00:14:46.494 "zcopy": true, 00:14:46.494 "get_zone_info": false, 00:14:46.494 "zone_management": false, 00:14:46.494 "zone_append": false, 00:14:46.494 "compare": false, 00:14:46.494 "compare_and_write": false, 
00:14:46.494 "abort": true, 00:14:46.494 "seek_hole": false, 00:14:46.494 "seek_data": false, 00:14:46.494 "copy": true, 00:14:46.494 "nvme_iov_md": false 00:14:46.494 }, 00:14:46.494 "memory_domains": [ 00:14:46.494 { 00:14:46.494 "dma_device_id": "system", 00:14:46.494 "dma_device_type": 1 00:14:46.494 }, 00:14:46.494 { 00:14:46.494 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:46.494 "dma_device_type": 2 00:14:46.494 } 00:14:46.494 ], 00:14:46.494 "driver_specific": {} 00:14:46.494 } 00:14:46.494 ] 00:14:46.494 06:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.494 06:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:46.494 06:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:46.494 06:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:46.494 06:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:46.494 06:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:46.494 06:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:46.494 06:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:46.494 06:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:46.494 06:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:46.494 06:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.494 06:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.494 06:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:14:46.494 06:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.494 06:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.494 06:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:46.494 06:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.494 06:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.494 06:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.494 06:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.494 "name": "Existed_Raid", 00:14:46.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.494 "strip_size_kb": 0, 00:14:46.494 "state": "configuring", 00:14:46.494 "raid_level": "raid1", 00:14:46.494 "superblock": false, 00:14:46.494 "num_base_bdevs": 4, 00:14:46.494 "num_base_bdevs_discovered": 3, 00:14:46.494 "num_base_bdevs_operational": 4, 00:14:46.494 "base_bdevs_list": [ 00:14:46.494 { 00:14:46.494 "name": "BaseBdev1", 00:14:46.494 "uuid": "db846d3a-9f98-44c6-a62a-ce8572fe69d1", 00:14:46.494 "is_configured": true, 00:14:46.494 "data_offset": 0, 00:14:46.494 "data_size": 65536 00:14:46.494 }, 00:14:46.494 { 00:14:46.494 "name": "BaseBdev2", 00:14:46.494 "uuid": "5e94ccca-5a67-44b3-b69b-2c55c05a0ecf", 00:14:46.494 "is_configured": true, 00:14:46.494 "data_offset": 0, 00:14:46.494 "data_size": 65536 00:14:46.494 }, 00:14:46.494 { 00:14:46.494 "name": "BaseBdev3", 00:14:46.494 "uuid": "95ef0ff0-b2ae-4f1d-baa4-d022c8e836a7", 00:14:46.494 "is_configured": true, 00:14:46.494 "data_offset": 0, 00:14:46.494 "data_size": 65536 00:14:46.494 }, 00:14:46.494 { 00:14:46.494 "name": "BaseBdev4", 00:14:46.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.494 "is_configured": false, 
00:14:46.494 "data_offset": 0, 00:14:46.494 "data_size": 0 00:14:46.494 } 00:14:46.494 ] 00:14:46.494 }' 00:14:46.494 06:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.494 06:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.755 06:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:46.755 06:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.755 06:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.015 [2024-11-26 06:24:30.919947] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:47.015 [2024-11-26 06:24:30.920134] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:47.015 [2024-11-26 06:24:30.920200] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:47.015 [2024-11-26 06:24:30.920591] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:47.015 [2024-11-26 06:24:30.920847] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:47.015 [2024-11-26 06:24:30.920899] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:47.015 [2024-11-26 06:24:30.921256] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:47.015 BaseBdev4 00:14:47.015 06:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.016 06:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:47.016 06:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:47.016 06:24:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:47.016 06:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:47.016 06:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:47.016 06:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:47.016 06:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:47.016 06:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.016 06:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.016 06:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.016 06:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:47.016 06:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.016 06:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.016 [ 00:14:47.016 { 00:14:47.016 "name": "BaseBdev4", 00:14:47.016 "aliases": [ 00:14:47.016 "b888c379-0935-4ae9-92b3-57bc74884876" 00:14:47.016 ], 00:14:47.016 "product_name": "Malloc disk", 00:14:47.016 "block_size": 512, 00:14:47.016 "num_blocks": 65536, 00:14:47.016 "uuid": "b888c379-0935-4ae9-92b3-57bc74884876", 00:14:47.016 "assigned_rate_limits": { 00:14:47.016 "rw_ios_per_sec": 0, 00:14:47.016 "rw_mbytes_per_sec": 0, 00:14:47.016 "r_mbytes_per_sec": 0, 00:14:47.016 "w_mbytes_per_sec": 0 00:14:47.016 }, 00:14:47.016 "claimed": true, 00:14:47.016 "claim_type": "exclusive_write", 00:14:47.016 "zoned": false, 00:14:47.016 "supported_io_types": { 00:14:47.016 "read": true, 00:14:47.016 "write": true, 00:14:47.016 "unmap": true, 00:14:47.016 "flush": true, 00:14:47.016 "reset": true, 00:14:47.016 
"nvme_admin": false, 00:14:47.016 "nvme_io": false, 00:14:47.016 "nvme_io_md": false, 00:14:47.016 "write_zeroes": true, 00:14:47.016 "zcopy": true, 00:14:47.016 "get_zone_info": false, 00:14:47.016 "zone_management": false, 00:14:47.016 "zone_append": false, 00:14:47.016 "compare": false, 00:14:47.016 "compare_and_write": false, 00:14:47.016 "abort": true, 00:14:47.016 "seek_hole": false, 00:14:47.016 "seek_data": false, 00:14:47.016 "copy": true, 00:14:47.016 "nvme_iov_md": false 00:14:47.016 }, 00:14:47.016 "memory_domains": [ 00:14:47.016 { 00:14:47.016 "dma_device_id": "system", 00:14:47.016 "dma_device_type": 1 00:14:47.016 }, 00:14:47.016 { 00:14:47.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:47.016 "dma_device_type": 2 00:14:47.016 } 00:14:47.016 ], 00:14:47.016 "driver_specific": {} 00:14:47.016 } 00:14:47.016 ] 00:14:47.016 06:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.016 06:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:47.016 06:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:47.016 06:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:47.016 06:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:14:47.016 06:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:47.016 06:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:47.016 06:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:47.016 06:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:47.016 06:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:47.016 06:24:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:47.016 06:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:47.016 06:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:47.016 06:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:47.016 06:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:47.016 06:24:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.016 06:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.016 06:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.016 06:24:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.016 06:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:47.016 "name": "Existed_Raid", 00:14:47.016 "uuid": "d6b79144-5b6d-4b74-9a9e-12a314141fb7", 00:14:47.016 "strip_size_kb": 0, 00:14:47.016 "state": "online", 00:14:47.016 "raid_level": "raid1", 00:14:47.016 "superblock": false, 00:14:47.016 "num_base_bdevs": 4, 00:14:47.016 "num_base_bdevs_discovered": 4, 00:14:47.016 "num_base_bdevs_operational": 4, 00:14:47.016 "base_bdevs_list": [ 00:14:47.016 { 00:14:47.016 "name": "BaseBdev1", 00:14:47.016 "uuid": "db846d3a-9f98-44c6-a62a-ce8572fe69d1", 00:14:47.016 "is_configured": true, 00:14:47.016 "data_offset": 0, 00:14:47.016 "data_size": 65536 00:14:47.016 }, 00:14:47.016 { 00:14:47.016 "name": "BaseBdev2", 00:14:47.016 "uuid": "5e94ccca-5a67-44b3-b69b-2c55c05a0ecf", 00:14:47.016 "is_configured": true, 00:14:47.016 "data_offset": 0, 00:14:47.016 "data_size": 65536 00:14:47.016 }, 00:14:47.016 { 00:14:47.016 "name": "BaseBdev3", 00:14:47.016 "uuid": 
"95ef0ff0-b2ae-4f1d-baa4-d022c8e836a7", 00:14:47.016 "is_configured": true, 00:14:47.016 "data_offset": 0, 00:14:47.016 "data_size": 65536 00:14:47.016 }, 00:14:47.016 { 00:14:47.016 "name": "BaseBdev4", 00:14:47.016 "uuid": "b888c379-0935-4ae9-92b3-57bc74884876", 00:14:47.016 "is_configured": true, 00:14:47.016 "data_offset": 0, 00:14:47.016 "data_size": 65536 00:14:47.016 } 00:14:47.016 ] 00:14:47.016 }' 00:14:47.016 06:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:47.016 06:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.276 06:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:47.276 06:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:47.276 06:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:47.276 06:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:47.276 06:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:47.276 06:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:47.276 06:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:47.276 06:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:47.276 06:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.536 06:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.536 [2024-11-26 06:24:31.415607] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:47.536 06:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.536 06:24:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:47.536 "name": "Existed_Raid", 00:14:47.536 "aliases": [ 00:14:47.536 "d6b79144-5b6d-4b74-9a9e-12a314141fb7" 00:14:47.536 ], 00:14:47.536 "product_name": "Raid Volume", 00:14:47.536 "block_size": 512, 00:14:47.536 "num_blocks": 65536, 00:14:47.536 "uuid": "d6b79144-5b6d-4b74-9a9e-12a314141fb7", 00:14:47.536 "assigned_rate_limits": { 00:14:47.536 "rw_ios_per_sec": 0, 00:14:47.536 "rw_mbytes_per_sec": 0, 00:14:47.536 "r_mbytes_per_sec": 0, 00:14:47.536 "w_mbytes_per_sec": 0 00:14:47.536 }, 00:14:47.536 "claimed": false, 00:14:47.536 "zoned": false, 00:14:47.536 "supported_io_types": { 00:14:47.536 "read": true, 00:14:47.536 "write": true, 00:14:47.536 "unmap": false, 00:14:47.536 "flush": false, 00:14:47.536 "reset": true, 00:14:47.536 "nvme_admin": false, 00:14:47.536 "nvme_io": false, 00:14:47.536 "nvme_io_md": false, 00:14:47.536 "write_zeroes": true, 00:14:47.536 "zcopy": false, 00:14:47.536 "get_zone_info": false, 00:14:47.536 "zone_management": false, 00:14:47.536 "zone_append": false, 00:14:47.536 "compare": false, 00:14:47.536 "compare_and_write": false, 00:14:47.536 "abort": false, 00:14:47.536 "seek_hole": false, 00:14:47.536 "seek_data": false, 00:14:47.536 "copy": false, 00:14:47.536 "nvme_iov_md": false 00:14:47.536 }, 00:14:47.536 "memory_domains": [ 00:14:47.536 { 00:14:47.536 "dma_device_id": "system", 00:14:47.536 "dma_device_type": 1 00:14:47.536 }, 00:14:47.536 { 00:14:47.536 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:47.536 "dma_device_type": 2 00:14:47.536 }, 00:14:47.536 { 00:14:47.536 "dma_device_id": "system", 00:14:47.536 "dma_device_type": 1 00:14:47.536 }, 00:14:47.536 { 00:14:47.536 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:47.536 "dma_device_type": 2 00:14:47.536 }, 00:14:47.536 { 00:14:47.536 "dma_device_id": "system", 00:14:47.536 "dma_device_type": 1 00:14:47.536 }, 00:14:47.536 { 00:14:47.536 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:14:47.536 "dma_device_type": 2 00:14:47.536 }, 00:14:47.536 { 00:14:47.536 "dma_device_id": "system", 00:14:47.536 "dma_device_type": 1 00:14:47.536 }, 00:14:47.536 { 00:14:47.536 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:47.536 "dma_device_type": 2 00:14:47.536 } 00:14:47.536 ], 00:14:47.536 "driver_specific": { 00:14:47.536 "raid": { 00:14:47.536 "uuid": "d6b79144-5b6d-4b74-9a9e-12a314141fb7", 00:14:47.536 "strip_size_kb": 0, 00:14:47.536 "state": "online", 00:14:47.536 "raid_level": "raid1", 00:14:47.536 "superblock": false, 00:14:47.536 "num_base_bdevs": 4, 00:14:47.536 "num_base_bdevs_discovered": 4, 00:14:47.536 "num_base_bdevs_operational": 4, 00:14:47.536 "base_bdevs_list": [ 00:14:47.536 { 00:14:47.536 "name": "BaseBdev1", 00:14:47.536 "uuid": "db846d3a-9f98-44c6-a62a-ce8572fe69d1", 00:14:47.536 "is_configured": true, 00:14:47.536 "data_offset": 0, 00:14:47.536 "data_size": 65536 00:14:47.536 }, 00:14:47.536 { 00:14:47.536 "name": "BaseBdev2", 00:14:47.536 "uuid": "5e94ccca-5a67-44b3-b69b-2c55c05a0ecf", 00:14:47.536 "is_configured": true, 00:14:47.536 "data_offset": 0, 00:14:47.536 "data_size": 65536 00:14:47.536 }, 00:14:47.536 { 00:14:47.536 "name": "BaseBdev3", 00:14:47.536 "uuid": "95ef0ff0-b2ae-4f1d-baa4-d022c8e836a7", 00:14:47.536 "is_configured": true, 00:14:47.536 "data_offset": 0, 00:14:47.536 "data_size": 65536 00:14:47.536 }, 00:14:47.536 { 00:14:47.536 "name": "BaseBdev4", 00:14:47.536 "uuid": "b888c379-0935-4ae9-92b3-57bc74884876", 00:14:47.536 "is_configured": true, 00:14:47.536 "data_offset": 0, 00:14:47.536 "data_size": 65536 00:14:47.536 } 00:14:47.536 ] 00:14:47.536 } 00:14:47.536 } 00:14:47.536 }' 00:14:47.536 06:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:47.536 06:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:47.536 BaseBdev2 00:14:47.536 BaseBdev3 
00:14:47.536 BaseBdev4' 00:14:47.536 06:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:47.536 06:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:47.536 06:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:47.536 06:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:47.536 06:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:47.536 06:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.536 06:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.536 06:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.536 06:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:47.536 06:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:47.536 06:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:47.536 06:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:47.536 06:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:47.536 06:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.536 06:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.536 06:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.536 06:24:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:47.536 06:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:47.536 06:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:47.536 06:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:47.536 06:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:47.536 06:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.536 06:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.536 06:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.797 06:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:47.797 06:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:47.797 06:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:47.797 06:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:47.797 06:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.797 06:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.797 06:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:47.797 06:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.797 06:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:47.797 06:24:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:47.797 06:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:47.797 06:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.797 06:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.797 [2024-11-26 06:24:31.722854] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:47.797 06:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.797 06:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:47.797 06:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:14:47.797 06:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:47.797 06:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:47.797 06:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:47.797 06:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:14:47.797 06:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:47.797 06:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:47.797 06:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:47.797 06:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:47.797 06:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:47.797 06:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:47.797 
06:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:47.797 06:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:47.797 06:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:47.797 06:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.797 06:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:47.797 06:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.797 06:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.797 06:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.797 06:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:47.797 "name": "Existed_Raid", 00:14:47.797 "uuid": "d6b79144-5b6d-4b74-9a9e-12a314141fb7", 00:14:47.797 "strip_size_kb": 0, 00:14:47.797 "state": "online", 00:14:47.797 "raid_level": "raid1", 00:14:47.797 "superblock": false, 00:14:47.797 "num_base_bdevs": 4, 00:14:47.797 "num_base_bdevs_discovered": 3, 00:14:47.797 "num_base_bdevs_operational": 3, 00:14:47.797 "base_bdevs_list": [ 00:14:47.797 { 00:14:47.797 "name": null, 00:14:47.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.797 "is_configured": false, 00:14:47.797 "data_offset": 0, 00:14:47.797 "data_size": 65536 00:14:47.797 }, 00:14:47.797 { 00:14:47.797 "name": "BaseBdev2", 00:14:47.797 "uuid": "5e94ccca-5a67-44b3-b69b-2c55c05a0ecf", 00:14:47.797 "is_configured": true, 00:14:47.797 "data_offset": 0, 00:14:47.797 "data_size": 65536 00:14:47.797 }, 00:14:47.797 { 00:14:47.797 "name": "BaseBdev3", 00:14:47.797 "uuid": "95ef0ff0-b2ae-4f1d-baa4-d022c8e836a7", 00:14:47.797 "is_configured": true, 00:14:47.797 "data_offset": 0, 
00:14:47.797 "data_size": 65536 00:14:47.797 }, 00:14:47.797 { 00:14:47.797 "name": "BaseBdev4", 00:14:47.797 "uuid": "b888c379-0935-4ae9-92b3-57bc74884876", 00:14:47.797 "is_configured": true, 00:14:47.797 "data_offset": 0, 00:14:47.797 "data_size": 65536 00:14:47.797 } 00:14:47.797 ] 00:14:47.797 }' 00:14:47.797 06:24:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:47.797 06:24:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.366 06:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:48.366 06:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:48.366 06:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.366 06:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.366 06:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.366 06:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:48.366 06:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.366 06:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:48.366 06:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:48.366 06:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:48.366 06:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.366 06:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.366 [2024-11-26 06:24:32.305311] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:48.366 06:24:32 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.366 06:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:48.366 06:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:48.366 06:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.366 06:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:48.366 06:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.366 06:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.366 06:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.366 06:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:48.366 06:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:48.366 06:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:48.366 06:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.366 06:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.366 [2024-11-26 06:24:32.468366] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:48.625 06:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.625 06:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:48.625 06:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:48.625 06:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.625 06:24:32 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.625 06:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:48.625 06:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.625 06:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.625 06:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:48.625 06:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:48.625 06:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:48.625 06:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.625 06:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.625 [2024-11-26 06:24:32.631224] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:48.625 [2024-11-26 06:24:32.631412] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:48.625 [2024-11-26 06:24:32.741342] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:48.625 [2024-11-26 06:24:32.741493] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:48.625 [2024-11-26 06:24:32.741597] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:48.625 06:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.625 06:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:48.625 06:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:48.625 06:24:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.625 06:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:48.625 06:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.625 06:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.885 06:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.885 06:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:48.885 06:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:48.885 06:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:14:48.885 06:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:48.885 06:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:48.885 06:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:48.885 06:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.885 06:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.885 BaseBdev2 00:14:48.885 06:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.885 06:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:48.885 06:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:48.885 06:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:48.885 06:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:48.885 06:24:32 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:48.885 06:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:48.885 06:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:48.885 06:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.885 06:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.885 06:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.885 06:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:48.885 06:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.885 06:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.885 [ 00:14:48.885 { 00:14:48.885 "name": "BaseBdev2", 00:14:48.885 "aliases": [ 00:14:48.885 "7e91a0d8-00f4-4c65-9c58-a40890c42995" 00:14:48.885 ], 00:14:48.885 "product_name": "Malloc disk", 00:14:48.885 "block_size": 512, 00:14:48.885 "num_blocks": 65536, 00:14:48.885 "uuid": "7e91a0d8-00f4-4c65-9c58-a40890c42995", 00:14:48.885 "assigned_rate_limits": { 00:14:48.885 "rw_ios_per_sec": 0, 00:14:48.885 "rw_mbytes_per_sec": 0, 00:14:48.885 "r_mbytes_per_sec": 0, 00:14:48.885 "w_mbytes_per_sec": 0 00:14:48.885 }, 00:14:48.885 "claimed": false, 00:14:48.885 "zoned": false, 00:14:48.885 "supported_io_types": { 00:14:48.885 "read": true, 00:14:48.885 "write": true, 00:14:48.885 "unmap": true, 00:14:48.885 "flush": true, 00:14:48.885 "reset": true, 00:14:48.885 "nvme_admin": false, 00:14:48.885 "nvme_io": false, 00:14:48.885 "nvme_io_md": false, 00:14:48.885 "write_zeroes": true, 00:14:48.886 "zcopy": true, 00:14:48.886 "get_zone_info": false, 00:14:48.886 "zone_management": false, 00:14:48.886 "zone_append": false, 
00:14:48.886 "compare": false, 00:14:48.886 "compare_and_write": false, 00:14:48.886 "abort": true, 00:14:48.886 "seek_hole": false, 00:14:48.886 "seek_data": false, 00:14:48.886 "copy": true, 00:14:48.886 "nvme_iov_md": false 00:14:48.886 }, 00:14:48.886 "memory_domains": [ 00:14:48.886 { 00:14:48.886 "dma_device_id": "system", 00:14:48.886 "dma_device_type": 1 00:14:48.886 }, 00:14:48.886 { 00:14:48.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:48.886 "dma_device_type": 2 00:14:48.886 } 00:14:48.886 ], 00:14:48.886 "driver_specific": {} 00:14:48.886 } 00:14:48.886 ] 00:14:48.886 06:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.886 06:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:48.886 06:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:48.886 06:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:48.886 06:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:48.886 06:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.886 06:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.886 BaseBdev3 00:14:48.886 06:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.886 06:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:48.886 06:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:48.886 06:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:48.886 06:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:48.886 06:24:32 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:48.886 06:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:48.886 06:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:48.886 06:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.886 06:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.886 06:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.886 06:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:48.886 06:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.886 06:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.886 [ 00:14:48.886 { 00:14:48.886 "name": "BaseBdev3", 00:14:48.886 "aliases": [ 00:14:48.886 "1c037aaa-dcbf-4f60-b0f5-81e3ae41d3d4" 00:14:48.886 ], 00:14:48.886 "product_name": "Malloc disk", 00:14:48.886 "block_size": 512, 00:14:48.886 "num_blocks": 65536, 00:14:48.886 "uuid": "1c037aaa-dcbf-4f60-b0f5-81e3ae41d3d4", 00:14:48.886 "assigned_rate_limits": { 00:14:48.886 "rw_ios_per_sec": 0, 00:14:48.886 "rw_mbytes_per_sec": 0, 00:14:48.886 "r_mbytes_per_sec": 0, 00:14:48.886 "w_mbytes_per_sec": 0 00:14:48.886 }, 00:14:48.886 "claimed": false, 00:14:48.886 "zoned": false, 00:14:48.886 "supported_io_types": { 00:14:48.886 "read": true, 00:14:48.886 "write": true, 00:14:48.886 "unmap": true, 00:14:48.886 "flush": true, 00:14:48.886 "reset": true, 00:14:48.886 "nvme_admin": false, 00:14:48.886 "nvme_io": false, 00:14:48.886 "nvme_io_md": false, 00:14:48.886 "write_zeroes": true, 00:14:48.886 "zcopy": true, 00:14:48.886 "get_zone_info": false, 00:14:48.886 "zone_management": false, 00:14:48.886 "zone_append": false, 
00:14:48.886 "compare": false, 00:14:48.886 "compare_and_write": false, 00:14:48.886 "abort": true, 00:14:48.886 "seek_hole": false, 00:14:48.886 "seek_data": false, 00:14:48.886 "copy": true, 00:14:48.886 "nvme_iov_md": false 00:14:48.886 }, 00:14:48.886 "memory_domains": [ 00:14:48.886 { 00:14:48.886 "dma_device_id": "system", 00:14:48.886 "dma_device_type": 1 00:14:48.886 }, 00:14:48.886 { 00:14:48.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:48.886 "dma_device_type": 2 00:14:48.886 } 00:14:48.886 ], 00:14:48.886 "driver_specific": {} 00:14:48.886 } 00:14:48.886 ] 00:14:48.886 06:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.886 06:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:48.886 06:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:48.886 06:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:48.886 06:24:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:48.886 06:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.886 06:24:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.146 BaseBdev4 00:14:49.146 06:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.146 06:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:14:49.146 06:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:49.146 06:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:49.146 06:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:49.146 06:24:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:49.146 06:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:49.146 06:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:49.146 06:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.146 06:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.146 06:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.146 06:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:49.146 06:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.146 06:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.146 [ 00:14:49.146 { 00:14:49.146 "name": "BaseBdev4", 00:14:49.146 "aliases": [ 00:14:49.146 "1560bd1e-24f4-40cd-ad76-f1049169c10c" 00:14:49.146 ], 00:14:49.146 "product_name": "Malloc disk", 00:14:49.146 "block_size": 512, 00:14:49.146 "num_blocks": 65536, 00:14:49.146 "uuid": "1560bd1e-24f4-40cd-ad76-f1049169c10c", 00:14:49.146 "assigned_rate_limits": { 00:14:49.146 "rw_ios_per_sec": 0, 00:14:49.146 "rw_mbytes_per_sec": 0, 00:14:49.146 "r_mbytes_per_sec": 0, 00:14:49.146 "w_mbytes_per_sec": 0 00:14:49.146 }, 00:14:49.146 "claimed": false, 00:14:49.146 "zoned": false, 00:14:49.146 "supported_io_types": { 00:14:49.146 "read": true, 00:14:49.146 "write": true, 00:14:49.146 "unmap": true, 00:14:49.146 "flush": true, 00:14:49.146 "reset": true, 00:14:49.146 "nvme_admin": false, 00:14:49.146 "nvme_io": false, 00:14:49.146 "nvme_io_md": false, 00:14:49.146 "write_zeroes": true, 00:14:49.146 "zcopy": true, 00:14:49.146 "get_zone_info": false, 00:14:49.146 "zone_management": false, 00:14:49.146 "zone_append": false, 
00:14:49.146 "compare": false, 00:14:49.146 "compare_and_write": false, 00:14:49.146 "abort": true, 00:14:49.146 "seek_hole": false, 00:14:49.146 "seek_data": false, 00:14:49.146 "copy": true, 00:14:49.146 "nvme_iov_md": false 00:14:49.146 }, 00:14:49.146 "memory_domains": [ 00:14:49.146 { 00:14:49.146 "dma_device_id": "system", 00:14:49.146 "dma_device_type": 1 00:14:49.146 }, 00:14:49.146 { 00:14:49.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.146 "dma_device_type": 2 00:14:49.146 } 00:14:49.146 ], 00:14:49.146 "driver_specific": {} 00:14:49.146 } 00:14:49.146 ] 00:14:49.146 06:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.146 06:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:49.146 06:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:49.146 06:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:49.146 06:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:49.146 06:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.146 06:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.146 [2024-11-26 06:24:33.063907] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:49.146 [2024-11-26 06:24:33.064091] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:49.146 [2024-11-26 06:24:33.064122] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:49.146 [2024-11-26 06:24:33.066174] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:49.146 [2024-11-26 06:24:33.066225] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:49.146 06:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.146 06:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:49.146 06:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:49.146 06:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:49.146 06:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:49.146 06:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:49.146 06:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:49.146 06:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.146 06:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.146 06:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.146 06:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.146 06:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.146 06:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.147 06:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:49.147 06:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.147 06:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.147 06:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:14:49.147 "name": "Existed_Raid", 00:14:49.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.147 "strip_size_kb": 0, 00:14:49.147 "state": "configuring", 00:14:49.147 "raid_level": "raid1", 00:14:49.147 "superblock": false, 00:14:49.147 "num_base_bdevs": 4, 00:14:49.147 "num_base_bdevs_discovered": 3, 00:14:49.147 "num_base_bdevs_operational": 4, 00:14:49.147 "base_bdevs_list": [ 00:14:49.147 { 00:14:49.147 "name": "BaseBdev1", 00:14:49.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.147 "is_configured": false, 00:14:49.147 "data_offset": 0, 00:14:49.147 "data_size": 0 00:14:49.147 }, 00:14:49.147 { 00:14:49.147 "name": "BaseBdev2", 00:14:49.147 "uuid": "7e91a0d8-00f4-4c65-9c58-a40890c42995", 00:14:49.147 "is_configured": true, 00:14:49.147 "data_offset": 0, 00:14:49.147 "data_size": 65536 00:14:49.147 }, 00:14:49.147 { 00:14:49.147 "name": "BaseBdev3", 00:14:49.147 "uuid": "1c037aaa-dcbf-4f60-b0f5-81e3ae41d3d4", 00:14:49.147 "is_configured": true, 00:14:49.147 "data_offset": 0, 00:14:49.147 "data_size": 65536 00:14:49.147 }, 00:14:49.147 { 00:14:49.147 "name": "BaseBdev4", 00:14:49.147 "uuid": "1560bd1e-24f4-40cd-ad76-f1049169c10c", 00:14:49.147 "is_configured": true, 00:14:49.147 "data_offset": 0, 00:14:49.147 "data_size": 65536 00:14:49.147 } 00:14:49.147 ] 00:14:49.147 }' 00:14:49.147 06:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.147 06:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.406 06:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:49.406 06:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.406 06:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.406 [2024-11-26 06:24:33.527200] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:14:49.406 06:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.406 06:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:49.406 06:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:49.406 06:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:49.406 06:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:49.406 06:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:49.406 06:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:49.406 06:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.406 06:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.406 06:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.406 06:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.665 06:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.665 06:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:49.665 06:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.665 06:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.665 06:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.665 06:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.665 "name": "Existed_Raid", 00:14:49.665 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:49.665 "strip_size_kb": 0, 00:14:49.665 "state": "configuring", 00:14:49.665 "raid_level": "raid1", 00:14:49.665 "superblock": false, 00:14:49.665 "num_base_bdevs": 4, 00:14:49.665 "num_base_bdevs_discovered": 2, 00:14:49.665 "num_base_bdevs_operational": 4, 00:14:49.665 "base_bdevs_list": [ 00:14:49.665 { 00:14:49.665 "name": "BaseBdev1", 00:14:49.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.665 "is_configured": false, 00:14:49.665 "data_offset": 0, 00:14:49.665 "data_size": 0 00:14:49.665 }, 00:14:49.665 { 00:14:49.665 "name": null, 00:14:49.665 "uuid": "7e91a0d8-00f4-4c65-9c58-a40890c42995", 00:14:49.665 "is_configured": false, 00:14:49.665 "data_offset": 0, 00:14:49.665 "data_size": 65536 00:14:49.665 }, 00:14:49.665 { 00:14:49.665 "name": "BaseBdev3", 00:14:49.665 "uuid": "1c037aaa-dcbf-4f60-b0f5-81e3ae41d3d4", 00:14:49.665 "is_configured": true, 00:14:49.665 "data_offset": 0, 00:14:49.665 "data_size": 65536 00:14:49.665 }, 00:14:49.665 { 00:14:49.665 "name": "BaseBdev4", 00:14:49.665 "uuid": "1560bd1e-24f4-40cd-ad76-f1049169c10c", 00:14:49.665 "is_configured": true, 00:14:49.665 "data_offset": 0, 00:14:49.665 "data_size": 65536 00:14:49.665 } 00:14:49.665 ] 00:14:49.665 }' 00:14:49.665 06:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.665 06:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.925 06:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.925 06:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.925 06:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.925 06:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:49.925 06:24:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.925 06:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:49.925 06:24:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:49.925 06:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.925 06:24:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.925 [2024-11-26 06:24:34.034476] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:49.925 BaseBdev1 00:14:49.925 06:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.925 06:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:49.925 06:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:49.925 06:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:49.925 06:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:49.925 06:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:49.925 06:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:49.925 06:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:49.925 06:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.925 06:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.925 06:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.925 06:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:14:49.925 06:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.925 06:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.185 [ 00:14:50.185 { 00:14:50.185 "name": "BaseBdev1", 00:14:50.185 "aliases": [ 00:14:50.185 "9dda56fb-63c4-41ab-af15-61fc2b66e0e6" 00:14:50.185 ], 00:14:50.185 "product_name": "Malloc disk", 00:14:50.185 "block_size": 512, 00:14:50.185 "num_blocks": 65536, 00:14:50.185 "uuid": "9dda56fb-63c4-41ab-af15-61fc2b66e0e6", 00:14:50.185 "assigned_rate_limits": { 00:14:50.185 "rw_ios_per_sec": 0, 00:14:50.185 "rw_mbytes_per_sec": 0, 00:14:50.185 "r_mbytes_per_sec": 0, 00:14:50.185 "w_mbytes_per_sec": 0 00:14:50.185 }, 00:14:50.185 "claimed": true, 00:14:50.185 "claim_type": "exclusive_write", 00:14:50.185 "zoned": false, 00:14:50.185 "supported_io_types": { 00:14:50.185 "read": true, 00:14:50.185 "write": true, 00:14:50.185 "unmap": true, 00:14:50.185 "flush": true, 00:14:50.185 "reset": true, 00:14:50.185 "nvme_admin": false, 00:14:50.185 "nvme_io": false, 00:14:50.185 "nvme_io_md": false, 00:14:50.185 "write_zeroes": true, 00:14:50.185 "zcopy": true, 00:14:50.185 "get_zone_info": false, 00:14:50.185 "zone_management": false, 00:14:50.185 "zone_append": false, 00:14:50.185 "compare": false, 00:14:50.185 "compare_and_write": false, 00:14:50.185 "abort": true, 00:14:50.185 "seek_hole": false, 00:14:50.185 "seek_data": false, 00:14:50.185 "copy": true, 00:14:50.185 "nvme_iov_md": false 00:14:50.185 }, 00:14:50.185 "memory_domains": [ 00:14:50.185 { 00:14:50.185 "dma_device_id": "system", 00:14:50.185 "dma_device_type": 1 00:14:50.185 }, 00:14:50.185 { 00:14:50.185 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:50.185 "dma_device_type": 2 00:14:50.185 } 00:14:50.185 ], 00:14:50.185 "driver_specific": {} 00:14:50.185 } 00:14:50.185 ] 00:14:50.185 06:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:50.185 06:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:50.185 06:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:50.185 06:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:50.185 06:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:50.185 06:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:50.185 06:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:50.185 06:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:50.185 06:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.185 06:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.185 06:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.185 06:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.185 06:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.185 06:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:50.185 06:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.185 06:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.185 06:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.185 06:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.185 "name": "Existed_Raid", 00:14:50.185 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:50.185 "strip_size_kb": 0, 00:14:50.185 "state": "configuring", 00:14:50.185 "raid_level": "raid1", 00:14:50.185 "superblock": false, 00:14:50.185 "num_base_bdevs": 4, 00:14:50.185 "num_base_bdevs_discovered": 3, 00:14:50.185 "num_base_bdevs_operational": 4, 00:14:50.185 "base_bdevs_list": [ 00:14:50.185 { 00:14:50.185 "name": "BaseBdev1", 00:14:50.185 "uuid": "9dda56fb-63c4-41ab-af15-61fc2b66e0e6", 00:14:50.185 "is_configured": true, 00:14:50.186 "data_offset": 0, 00:14:50.186 "data_size": 65536 00:14:50.186 }, 00:14:50.186 { 00:14:50.186 "name": null, 00:14:50.186 "uuid": "7e91a0d8-00f4-4c65-9c58-a40890c42995", 00:14:50.186 "is_configured": false, 00:14:50.186 "data_offset": 0, 00:14:50.186 "data_size": 65536 00:14:50.186 }, 00:14:50.186 { 00:14:50.186 "name": "BaseBdev3", 00:14:50.186 "uuid": "1c037aaa-dcbf-4f60-b0f5-81e3ae41d3d4", 00:14:50.186 "is_configured": true, 00:14:50.186 "data_offset": 0, 00:14:50.186 "data_size": 65536 00:14:50.186 }, 00:14:50.186 { 00:14:50.186 "name": "BaseBdev4", 00:14:50.186 "uuid": "1560bd1e-24f4-40cd-ad76-f1049169c10c", 00:14:50.186 "is_configured": true, 00:14:50.186 "data_offset": 0, 00:14:50.186 "data_size": 65536 00:14:50.186 } 00:14:50.186 ] 00:14:50.186 }' 00:14:50.186 06:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.186 06:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.445 06:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.445 06:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:50.445 06:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.445 06:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.445 06:24:34 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.704 06:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:50.704 06:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:50.704 06:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.704 06:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.704 [2024-11-26 06:24:34.589747] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:50.704 06:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.704 06:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:50.704 06:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:50.704 06:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:50.704 06:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:50.704 06:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:50.704 06:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:50.705 06:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.705 06:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.705 06:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.705 06:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.705 06:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:14:50.705 06:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.705 06:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.705 06:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.705 06:24:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.705 06:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.705 "name": "Existed_Raid", 00:14:50.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.705 "strip_size_kb": 0, 00:14:50.705 "state": "configuring", 00:14:50.705 "raid_level": "raid1", 00:14:50.705 "superblock": false, 00:14:50.705 "num_base_bdevs": 4, 00:14:50.705 "num_base_bdevs_discovered": 2, 00:14:50.705 "num_base_bdevs_operational": 4, 00:14:50.705 "base_bdevs_list": [ 00:14:50.705 { 00:14:50.705 "name": "BaseBdev1", 00:14:50.705 "uuid": "9dda56fb-63c4-41ab-af15-61fc2b66e0e6", 00:14:50.705 "is_configured": true, 00:14:50.705 "data_offset": 0, 00:14:50.705 "data_size": 65536 00:14:50.705 }, 00:14:50.705 { 00:14:50.705 "name": null, 00:14:50.705 "uuid": "7e91a0d8-00f4-4c65-9c58-a40890c42995", 00:14:50.705 "is_configured": false, 00:14:50.705 "data_offset": 0, 00:14:50.705 "data_size": 65536 00:14:50.705 }, 00:14:50.705 { 00:14:50.705 "name": null, 00:14:50.705 "uuid": "1c037aaa-dcbf-4f60-b0f5-81e3ae41d3d4", 00:14:50.705 "is_configured": false, 00:14:50.705 "data_offset": 0, 00:14:50.705 "data_size": 65536 00:14:50.705 }, 00:14:50.705 { 00:14:50.705 "name": "BaseBdev4", 00:14:50.705 "uuid": "1560bd1e-24f4-40cd-ad76-f1049169c10c", 00:14:50.705 "is_configured": true, 00:14:50.705 "data_offset": 0, 00:14:50.705 "data_size": 65536 00:14:50.705 } 00:14:50.705 ] 00:14:50.705 }' 00:14:50.705 06:24:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.705 06:24:34 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.964 06:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:50.964 06:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.964 06:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.964 06:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.964 06:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.964 06:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:50.964 06:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:50.964 06:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.964 06:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.964 [2024-11-26 06:24:35.060967] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:50.964 06:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.964 06:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:50.964 06:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:50.964 06:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:50.964 06:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:50.964 06:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:50.964 06:24:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:50.964 06:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.964 06:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.964 06:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.964 06:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.964 06:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:50.964 06:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.964 06:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.964 06:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.964 06:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.223 06:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.223 "name": "Existed_Raid", 00:14:51.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.223 "strip_size_kb": 0, 00:14:51.223 "state": "configuring", 00:14:51.223 "raid_level": "raid1", 00:14:51.223 "superblock": false, 00:14:51.223 "num_base_bdevs": 4, 00:14:51.223 "num_base_bdevs_discovered": 3, 00:14:51.223 "num_base_bdevs_operational": 4, 00:14:51.223 "base_bdevs_list": [ 00:14:51.223 { 00:14:51.223 "name": "BaseBdev1", 00:14:51.223 "uuid": "9dda56fb-63c4-41ab-af15-61fc2b66e0e6", 00:14:51.223 "is_configured": true, 00:14:51.223 "data_offset": 0, 00:14:51.223 "data_size": 65536 00:14:51.223 }, 00:14:51.223 { 00:14:51.223 "name": null, 00:14:51.223 "uuid": "7e91a0d8-00f4-4c65-9c58-a40890c42995", 00:14:51.223 "is_configured": false, 00:14:51.223 "data_offset": 
0, 00:14:51.223 "data_size": 65536 00:14:51.223 }, 00:14:51.223 { 00:14:51.223 "name": "BaseBdev3", 00:14:51.223 "uuid": "1c037aaa-dcbf-4f60-b0f5-81e3ae41d3d4", 00:14:51.223 "is_configured": true, 00:14:51.223 "data_offset": 0, 00:14:51.223 "data_size": 65536 00:14:51.223 }, 00:14:51.223 { 00:14:51.223 "name": "BaseBdev4", 00:14:51.223 "uuid": "1560bd1e-24f4-40cd-ad76-f1049169c10c", 00:14:51.223 "is_configured": true, 00:14:51.223 "data_offset": 0, 00:14:51.223 "data_size": 65536 00:14:51.223 } 00:14:51.223 ] 00:14:51.223 }' 00:14:51.223 06:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.223 06:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.483 06:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.483 06:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.483 06:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.483 06:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:51.483 06:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.483 06:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:51.483 06:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:51.483 06:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.483 06:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.483 [2024-11-26 06:24:35.608090] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:51.784 06:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.784 06:24:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:51.784 06:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:51.784 06:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:51.784 06:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:51.784 06:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:51.784 06:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:51.784 06:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.784 06:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.784 06:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.784 06:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.784 06:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.784 06:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:51.784 06:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.784 06:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.784 06:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.784 06:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.784 "name": "Existed_Raid", 00:14:51.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.784 "strip_size_kb": 0, 00:14:51.784 "state": "configuring", 00:14:51.784 
"raid_level": "raid1", 00:14:51.784 "superblock": false, 00:14:51.784 "num_base_bdevs": 4, 00:14:51.784 "num_base_bdevs_discovered": 2, 00:14:51.784 "num_base_bdevs_operational": 4, 00:14:51.784 "base_bdevs_list": [ 00:14:51.784 { 00:14:51.784 "name": null, 00:14:51.784 "uuid": "9dda56fb-63c4-41ab-af15-61fc2b66e0e6", 00:14:51.784 "is_configured": false, 00:14:51.784 "data_offset": 0, 00:14:51.784 "data_size": 65536 00:14:51.784 }, 00:14:51.784 { 00:14:51.784 "name": null, 00:14:51.785 "uuid": "7e91a0d8-00f4-4c65-9c58-a40890c42995", 00:14:51.785 "is_configured": false, 00:14:51.785 "data_offset": 0, 00:14:51.785 "data_size": 65536 00:14:51.785 }, 00:14:51.785 { 00:14:51.785 "name": "BaseBdev3", 00:14:51.785 "uuid": "1c037aaa-dcbf-4f60-b0f5-81e3ae41d3d4", 00:14:51.785 "is_configured": true, 00:14:51.785 "data_offset": 0, 00:14:51.785 "data_size": 65536 00:14:51.785 }, 00:14:51.785 { 00:14:51.785 "name": "BaseBdev4", 00:14:51.785 "uuid": "1560bd1e-24f4-40cd-ad76-f1049169c10c", 00:14:51.785 "is_configured": true, 00:14:51.785 "data_offset": 0, 00:14:51.785 "data_size": 65536 00:14:51.785 } 00:14:51.785 ] 00:14:51.785 }' 00:14:51.785 06:24:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.785 06:24:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.354 06:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.354 06:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:52.354 06:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.354 06:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.354 06:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.354 06:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:14:52.354 06:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:52.354 06:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.354 06:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.354 [2024-11-26 06:24:36.256754] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:52.354 06:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.354 06:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:52.354 06:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:52.354 06:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:52.354 06:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:52.354 06:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:52.354 06:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:52.354 06:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.354 06:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.354 06:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.355 06:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.355 06:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.355 06:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:52.355 06:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.355 06:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:52.355 06:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.355 06:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.355 "name": "Existed_Raid", 00:14:52.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.355 "strip_size_kb": 0, 00:14:52.355 "state": "configuring", 00:14:52.355 "raid_level": "raid1", 00:14:52.355 "superblock": false, 00:14:52.355 "num_base_bdevs": 4, 00:14:52.355 "num_base_bdevs_discovered": 3, 00:14:52.355 "num_base_bdevs_operational": 4, 00:14:52.355 "base_bdevs_list": [ 00:14:52.355 { 00:14:52.355 "name": null, 00:14:52.355 "uuid": "9dda56fb-63c4-41ab-af15-61fc2b66e0e6", 00:14:52.355 "is_configured": false, 00:14:52.355 "data_offset": 0, 00:14:52.355 "data_size": 65536 00:14:52.355 }, 00:14:52.355 { 00:14:52.355 "name": "BaseBdev2", 00:14:52.355 "uuid": "7e91a0d8-00f4-4c65-9c58-a40890c42995", 00:14:52.355 "is_configured": true, 00:14:52.355 "data_offset": 0, 00:14:52.355 "data_size": 65536 00:14:52.355 }, 00:14:52.355 { 00:14:52.355 "name": "BaseBdev3", 00:14:52.355 "uuid": "1c037aaa-dcbf-4f60-b0f5-81e3ae41d3d4", 00:14:52.355 "is_configured": true, 00:14:52.355 "data_offset": 0, 00:14:52.355 "data_size": 65536 00:14:52.355 }, 00:14:52.355 { 00:14:52.355 "name": "BaseBdev4", 00:14:52.355 "uuid": "1560bd1e-24f4-40cd-ad76-f1049169c10c", 00:14:52.355 "is_configured": true, 00:14:52.355 "data_offset": 0, 00:14:52.355 "data_size": 65536 00:14:52.355 } 00:14:52.355 ] 00:14:52.355 }' 00:14:52.355 06:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.355 06:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.614 06:24:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.614 06:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:52.614 06:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.614 06:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.873 06:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.873 06:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:52.873 06:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.873 06:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:52.873 06:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.873 06:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.873 06:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.873 06:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9dda56fb-63c4-41ab-af15-61fc2b66e0e6 00:14:52.873 06:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.873 06:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.873 [2024-11-26 06:24:36.868226] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:52.874 [2024-11-26 06:24:36.868386] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:52.874 [2024-11-26 06:24:36.868415] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:52.874 
[2024-11-26 06:24:36.868736] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:52.874 [2024-11-26 06:24:36.868952] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:52.874 [2024-11-26 06:24:36.868994] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:52.874 [2024-11-26 06:24:36.869322] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:52.874 NewBaseBdev 00:14:52.874 06:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.874 06:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:52.874 06:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:52.874 06:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:52.874 06:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:52.874 06:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:52.874 06:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:52.874 06:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:52.874 06:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.874 06:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.874 06:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.874 06:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:52.874 06:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:52.874 06:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.874 [ 00:14:52.874 { 00:14:52.874 "name": "NewBaseBdev", 00:14:52.874 "aliases": [ 00:14:52.874 "9dda56fb-63c4-41ab-af15-61fc2b66e0e6" 00:14:52.874 ], 00:14:52.874 "product_name": "Malloc disk", 00:14:52.874 "block_size": 512, 00:14:52.874 "num_blocks": 65536, 00:14:52.874 "uuid": "9dda56fb-63c4-41ab-af15-61fc2b66e0e6", 00:14:52.874 "assigned_rate_limits": { 00:14:52.874 "rw_ios_per_sec": 0, 00:14:52.874 "rw_mbytes_per_sec": 0, 00:14:52.874 "r_mbytes_per_sec": 0, 00:14:52.874 "w_mbytes_per_sec": 0 00:14:52.874 }, 00:14:52.874 "claimed": true, 00:14:52.874 "claim_type": "exclusive_write", 00:14:52.874 "zoned": false, 00:14:52.874 "supported_io_types": { 00:14:52.874 "read": true, 00:14:52.874 "write": true, 00:14:52.874 "unmap": true, 00:14:52.874 "flush": true, 00:14:52.874 "reset": true, 00:14:52.874 "nvme_admin": false, 00:14:52.874 "nvme_io": false, 00:14:52.874 "nvme_io_md": false, 00:14:52.874 "write_zeroes": true, 00:14:52.874 "zcopy": true, 00:14:52.874 "get_zone_info": false, 00:14:52.874 "zone_management": false, 00:14:52.874 "zone_append": false, 00:14:52.874 "compare": false, 00:14:52.874 "compare_and_write": false, 00:14:52.874 "abort": true, 00:14:52.874 "seek_hole": false, 00:14:52.874 "seek_data": false, 00:14:52.874 "copy": true, 00:14:52.874 "nvme_iov_md": false 00:14:52.874 }, 00:14:52.874 "memory_domains": [ 00:14:52.874 { 00:14:52.874 "dma_device_id": "system", 00:14:52.874 "dma_device_type": 1 00:14:52.874 }, 00:14:52.874 { 00:14:52.874 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:52.874 "dma_device_type": 2 00:14:52.874 } 00:14:52.874 ], 00:14:52.874 "driver_specific": {} 00:14:52.874 } 00:14:52.874 ] 00:14:52.874 06:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.874 06:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 
00:14:52.874 06:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:14:52.874 06:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:52.874 06:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:52.874 06:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:52.874 06:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:52.874 06:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:52.874 06:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.874 06:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.874 06:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.874 06:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.874 06:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.874 06:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:52.874 06:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.874 06:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.874 06:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.874 06:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.874 "name": "Existed_Raid", 00:14:52.874 "uuid": "520ecf1b-a93f-4550-bda9-25ef5f4a0445", 00:14:52.874 "strip_size_kb": 0, 00:14:52.874 "state": "online", 00:14:52.874 
"raid_level": "raid1", 00:14:52.874 "superblock": false, 00:14:52.874 "num_base_bdevs": 4, 00:14:52.874 "num_base_bdevs_discovered": 4, 00:14:52.874 "num_base_bdevs_operational": 4, 00:14:52.874 "base_bdevs_list": [ 00:14:52.874 { 00:14:52.874 "name": "NewBaseBdev", 00:14:52.874 "uuid": "9dda56fb-63c4-41ab-af15-61fc2b66e0e6", 00:14:52.874 "is_configured": true, 00:14:52.874 "data_offset": 0, 00:14:52.874 "data_size": 65536 00:14:52.874 }, 00:14:52.874 { 00:14:52.874 "name": "BaseBdev2", 00:14:52.874 "uuid": "7e91a0d8-00f4-4c65-9c58-a40890c42995", 00:14:52.874 "is_configured": true, 00:14:52.874 "data_offset": 0, 00:14:52.874 "data_size": 65536 00:14:52.874 }, 00:14:52.874 { 00:14:52.874 "name": "BaseBdev3", 00:14:52.874 "uuid": "1c037aaa-dcbf-4f60-b0f5-81e3ae41d3d4", 00:14:52.874 "is_configured": true, 00:14:52.874 "data_offset": 0, 00:14:52.874 "data_size": 65536 00:14:52.874 }, 00:14:52.874 { 00:14:52.874 "name": "BaseBdev4", 00:14:52.874 "uuid": "1560bd1e-24f4-40cd-ad76-f1049169c10c", 00:14:52.874 "is_configured": true, 00:14:52.874 "data_offset": 0, 00:14:52.874 "data_size": 65536 00:14:52.874 } 00:14:52.874 ] 00:14:52.874 }' 00:14:52.874 06:24:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.874 06:24:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.442 06:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:53.442 06:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:53.442 06:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:53.442 06:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:53.442 06:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:53.442 06:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:14:53.442 06:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:53.442 06:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.442 06:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.442 06:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:53.442 [2024-11-26 06:24:37.439782] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:53.442 06:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.442 06:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:53.442 "name": "Existed_Raid", 00:14:53.442 "aliases": [ 00:14:53.442 "520ecf1b-a93f-4550-bda9-25ef5f4a0445" 00:14:53.442 ], 00:14:53.442 "product_name": "Raid Volume", 00:14:53.442 "block_size": 512, 00:14:53.442 "num_blocks": 65536, 00:14:53.442 "uuid": "520ecf1b-a93f-4550-bda9-25ef5f4a0445", 00:14:53.442 "assigned_rate_limits": { 00:14:53.442 "rw_ios_per_sec": 0, 00:14:53.442 "rw_mbytes_per_sec": 0, 00:14:53.442 "r_mbytes_per_sec": 0, 00:14:53.442 "w_mbytes_per_sec": 0 00:14:53.442 }, 00:14:53.442 "claimed": false, 00:14:53.442 "zoned": false, 00:14:53.442 "supported_io_types": { 00:14:53.442 "read": true, 00:14:53.442 "write": true, 00:14:53.442 "unmap": false, 00:14:53.442 "flush": false, 00:14:53.442 "reset": true, 00:14:53.442 "nvme_admin": false, 00:14:53.442 "nvme_io": false, 00:14:53.442 "nvme_io_md": false, 00:14:53.442 "write_zeroes": true, 00:14:53.442 "zcopy": false, 00:14:53.442 "get_zone_info": false, 00:14:53.442 "zone_management": false, 00:14:53.442 "zone_append": false, 00:14:53.442 "compare": false, 00:14:53.442 "compare_and_write": false, 00:14:53.442 "abort": false, 00:14:53.442 "seek_hole": false, 00:14:53.442 "seek_data": false, 00:14:53.442 
"copy": false, 00:14:53.442 "nvme_iov_md": false 00:14:53.442 }, 00:14:53.442 "memory_domains": [ 00:14:53.442 { 00:14:53.442 "dma_device_id": "system", 00:14:53.442 "dma_device_type": 1 00:14:53.442 }, 00:14:53.442 { 00:14:53.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.442 "dma_device_type": 2 00:14:53.442 }, 00:14:53.442 { 00:14:53.442 "dma_device_id": "system", 00:14:53.442 "dma_device_type": 1 00:14:53.442 }, 00:14:53.442 { 00:14:53.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.442 "dma_device_type": 2 00:14:53.442 }, 00:14:53.443 { 00:14:53.443 "dma_device_id": "system", 00:14:53.443 "dma_device_type": 1 00:14:53.443 }, 00:14:53.443 { 00:14:53.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.443 "dma_device_type": 2 00:14:53.443 }, 00:14:53.443 { 00:14:53.443 "dma_device_id": "system", 00:14:53.443 "dma_device_type": 1 00:14:53.443 }, 00:14:53.443 { 00:14:53.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.443 "dma_device_type": 2 00:14:53.443 } 00:14:53.443 ], 00:14:53.443 "driver_specific": { 00:14:53.443 "raid": { 00:14:53.443 "uuid": "520ecf1b-a93f-4550-bda9-25ef5f4a0445", 00:14:53.443 "strip_size_kb": 0, 00:14:53.443 "state": "online", 00:14:53.443 "raid_level": "raid1", 00:14:53.443 "superblock": false, 00:14:53.443 "num_base_bdevs": 4, 00:14:53.443 "num_base_bdevs_discovered": 4, 00:14:53.443 "num_base_bdevs_operational": 4, 00:14:53.443 "base_bdevs_list": [ 00:14:53.443 { 00:14:53.443 "name": "NewBaseBdev", 00:14:53.443 "uuid": "9dda56fb-63c4-41ab-af15-61fc2b66e0e6", 00:14:53.443 "is_configured": true, 00:14:53.443 "data_offset": 0, 00:14:53.443 "data_size": 65536 00:14:53.443 }, 00:14:53.443 { 00:14:53.443 "name": "BaseBdev2", 00:14:53.443 "uuid": "7e91a0d8-00f4-4c65-9c58-a40890c42995", 00:14:53.443 "is_configured": true, 00:14:53.443 "data_offset": 0, 00:14:53.443 "data_size": 65536 00:14:53.443 }, 00:14:53.443 { 00:14:53.443 "name": "BaseBdev3", 00:14:53.443 "uuid": "1c037aaa-dcbf-4f60-b0f5-81e3ae41d3d4", 00:14:53.443 
"is_configured": true, 00:14:53.443 "data_offset": 0, 00:14:53.443 "data_size": 65536 00:14:53.443 }, 00:14:53.443 { 00:14:53.443 "name": "BaseBdev4", 00:14:53.443 "uuid": "1560bd1e-24f4-40cd-ad76-f1049169c10c", 00:14:53.443 "is_configured": true, 00:14:53.443 "data_offset": 0, 00:14:53.443 "data_size": 65536 00:14:53.443 } 00:14:53.443 ] 00:14:53.443 } 00:14:53.443 } 00:14:53.443 }' 00:14:53.443 06:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:53.443 06:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:53.443 BaseBdev2 00:14:53.443 BaseBdev3 00:14:53.443 BaseBdev4' 00:14:53.443 06:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:53.443 06:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:53.703 06:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:53.703 06:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:53.703 06:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.703 06:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.703 06:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:53.703 06:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.703 06:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:53.703 06:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:53.703 06:24:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:53.703 06:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:53.703 06:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:53.703 06:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.703 06:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.703 06:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.703 06:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:53.703 06:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:53.704 06:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:53.704 06:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:53.704 06:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.704 06:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.704 06:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:53.704 06:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.704 06:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:53.704 06:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:53.704 06:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:53.704 06:24:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:53.704 06:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:53.704 06:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.704 06:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.704 06:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.704 06:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:53.704 06:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:53.704 06:24:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:53.704 06:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.704 06:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.704 [2024-11-26 06:24:37.786824] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:53.704 [2024-11-26 06:24:37.786951] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:53.704 [2024-11-26 06:24:37.787103] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:53.704 [2024-11-26 06:24:37.787446] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:53.704 [2024-11-26 06:24:37.787504] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:53.704 06:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.704 06:24:37 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 73691 00:14:53.704 06:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 73691 ']' 00:14:53.704 06:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 73691 00:14:53.704 06:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:14:53.704 06:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:53.704 06:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73691 00:14:53.704 06:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:53.704 killing process with pid 73691 00:14:53.704 06:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:53.704 06:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73691' 00:14:53.704 06:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 73691 00:14:53.704 [2024-11-26 06:24:37.834563] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:53.704 06:24:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 73691 00:14:54.273 [2024-11-26 06:24:38.255272] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:55.653 06:24:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:55.653 00:14:55.653 real 0m12.144s 00:14:55.653 user 0m19.213s 00:14:55.653 sys 0m2.240s 00:14:55.653 06:24:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:55.653 06:24:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.653 ************************************ 00:14:55.653 END TEST raid_state_function_test 00:14:55.653 ************************************ 
00:14:55.653 06:24:39 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:14:55.653 06:24:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:55.653 06:24:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:55.653 06:24:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:55.653 ************************************ 00:14:55.653 START TEST raid_state_function_test_sb 00:14:55.653 ************************************ 00:14:55.653 06:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:14:55.653 06:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:14:55.653 06:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:55.653 06:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:55.653 06:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:55.653 06:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:55.653 06:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:55.653 06:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:55.653 06:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:55.653 06:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:55.653 06:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:55.653 06:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:55.653 06:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:55.653 
06:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:55.653 06:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:55.653 06:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:55.653 06:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:55.653 06:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:55.653 06:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:55.653 06:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:55.653 06:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:55.653 06:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:55.653 06:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:55.653 06:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:55.653 06:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:55.653 06:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:14:55.653 06:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:14:55.653 06:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:55.653 06:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:55.653 06:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=74368 00:14:55.653 06:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:55.653 06:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74368' 00:14:55.653 Process raid pid: 74368 00:14:55.653 06:24:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 74368 00:14:55.653 06:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 74368 ']' 00:14:55.653 06:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:55.653 06:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:55.653 06:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:55.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:55.653 06:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:55.653 06:24:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.653 [2024-11-26 06:24:39.603149] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:14:55.653 [2024-11-26 06:24:39.603405] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:55.914 [2024-11-26 06:24:39.789389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:55.914 [2024-11-26 06:24:39.934006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:56.174 [2024-11-26 06:24:40.160261] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:56.174 [2024-11-26 06:24:40.160415] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:56.434 06:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:56.434 06:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:56.434 06:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:56.434 06:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.434 06:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.434 [2024-11-26 06:24:40.505138] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:56.434 [2024-11-26 06:24:40.505266] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:56.434 [2024-11-26 06:24:40.505297] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:56.434 [2024-11-26 06:24:40.505320] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:56.434 [2024-11-26 06:24:40.505338] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:14:56.434 [2024-11-26 06:24:40.505380] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:56.434 [2024-11-26 06:24:40.505423] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:56.434 [2024-11-26 06:24:40.505446] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:56.434 06:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.434 06:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:56.434 06:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:56.434 06:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:56.434 06:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:56.434 06:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:56.434 06:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:56.434 06:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.434 06:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.434 06:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.434 06:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.434 06:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.434 06:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:56.434 06:24:40 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.434 06:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.434 06:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.434 06:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.434 "name": "Existed_Raid", 00:14:56.434 "uuid": "4fbe5d79-75e2-4f84-b928-a6637e79cf72", 00:14:56.434 "strip_size_kb": 0, 00:14:56.434 "state": "configuring", 00:14:56.434 "raid_level": "raid1", 00:14:56.434 "superblock": true, 00:14:56.434 "num_base_bdevs": 4, 00:14:56.434 "num_base_bdevs_discovered": 0, 00:14:56.434 "num_base_bdevs_operational": 4, 00:14:56.434 "base_bdevs_list": [ 00:14:56.434 { 00:14:56.434 "name": "BaseBdev1", 00:14:56.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.434 "is_configured": false, 00:14:56.434 "data_offset": 0, 00:14:56.434 "data_size": 0 00:14:56.434 }, 00:14:56.434 { 00:14:56.434 "name": "BaseBdev2", 00:14:56.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.434 "is_configured": false, 00:14:56.434 "data_offset": 0, 00:14:56.434 "data_size": 0 00:14:56.434 }, 00:14:56.434 { 00:14:56.434 "name": "BaseBdev3", 00:14:56.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.434 "is_configured": false, 00:14:56.434 "data_offset": 0, 00:14:56.434 "data_size": 0 00:14:56.434 }, 00:14:56.434 { 00:14:56.434 "name": "BaseBdev4", 00:14:56.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.434 "is_configured": false, 00:14:56.434 "data_offset": 0, 00:14:56.434 "data_size": 0 00:14:56.434 } 00:14:56.434 ] 00:14:56.434 }' 00:14:56.434 06:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.434 06:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.005 06:24:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:57.005 06:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.005 06:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.005 [2024-11-26 06:24:40.976318] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:57.005 [2024-11-26 06:24:40.976390] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:57.005 06:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.005 06:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:57.005 06:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.005 06:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.005 [2024-11-26 06:24:40.988315] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:57.005 [2024-11-26 06:24:40.988466] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:57.005 [2024-11-26 06:24:40.988498] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:57.005 [2024-11-26 06:24:40.988542] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:57.005 [2024-11-26 06:24:40.988590] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:57.005 [2024-11-26 06:24:40.988628] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:57.005 [2024-11-26 06:24:40.988654] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:14:57.005 [2024-11-26 06:24:40.988699] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:57.005 06:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.005 06:24:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:57.005 06:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.005 06:24:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.005 [2024-11-26 06:24:41.037561] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:57.005 BaseBdev1 00:14:57.005 06:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.005 06:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:57.005 06:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:57.005 06:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:57.005 06:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:57.005 06:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:57.005 06:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:57.005 06:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:57.005 06:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.005 06:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.005 06:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:57.005 06:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:57.005 06:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.005 06:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.005 [ 00:14:57.005 { 00:14:57.005 "name": "BaseBdev1", 00:14:57.005 "aliases": [ 00:14:57.005 "62b2bae7-c5ed-4eea-b00e-853f1a824582" 00:14:57.005 ], 00:14:57.005 "product_name": "Malloc disk", 00:14:57.005 "block_size": 512, 00:14:57.005 "num_blocks": 65536, 00:14:57.005 "uuid": "62b2bae7-c5ed-4eea-b00e-853f1a824582", 00:14:57.005 "assigned_rate_limits": { 00:14:57.005 "rw_ios_per_sec": 0, 00:14:57.005 "rw_mbytes_per_sec": 0, 00:14:57.005 "r_mbytes_per_sec": 0, 00:14:57.005 "w_mbytes_per_sec": 0 00:14:57.005 }, 00:14:57.005 "claimed": true, 00:14:57.005 "claim_type": "exclusive_write", 00:14:57.005 "zoned": false, 00:14:57.005 "supported_io_types": { 00:14:57.005 "read": true, 00:14:57.005 "write": true, 00:14:57.005 "unmap": true, 00:14:57.005 "flush": true, 00:14:57.005 "reset": true, 00:14:57.005 "nvme_admin": false, 00:14:57.005 "nvme_io": false, 00:14:57.005 "nvme_io_md": false, 00:14:57.005 "write_zeroes": true, 00:14:57.005 "zcopy": true, 00:14:57.005 "get_zone_info": false, 00:14:57.005 "zone_management": false, 00:14:57.005 "zone_append": false, 00:14:57.005 "compare": false, 00:14:57.005 "compare_and_write": false, 00:14:57.005 "abort": true, 00:14:57.005 "seek_hole": false, 00:14:57.005 "seek_data": false, 00:14:57.005 "copy": true, 00:14:57.005 "nvme_iov_md": false 00:14:57.005 }, 00:14:57.005 "memory_domains": [ 00:14:57.005 { 00:14:57.005 "dma_device_id": "system", 00:14:57.005 "dma_device_type": 1 00:14:57.005 }, 00:14:57.005 { 00:14:57.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.005 "dma_device_type": 2 00:14:57.005 } 00:14:57.005 ], 00:14:57.005 "driver_specific": {} 
00:14:57.005 } 00:14:57.005 ] 00:14:57.005 06:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.005 06:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:57.005 06:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:57.005 06:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:57.005 06:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:57.005 06:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:57.005 06:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:57.005 06:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:57.005 06:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.005 06:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.005 06:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.005 06:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.005 06:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.005 06:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:57.005 06:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.005 06:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.005 06:24:41 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.005 06:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.005 "name": "Existed_Raid", 00:14:57.005 "uuid": "90ccb284-b0aa-45d5-8026-e9c542621336", 00:14:57.005 "strip_size_kb": 0, 00:14:57.005 "state": "configuring", 00:14:57.005 "raid_level": "raid1", 00:14:57.005 "superblock": true, 00:14:57.005 "num_base_bdevs": 4, 00:14:57.005 "num_base_bdevs_discovered": 1, 00:14:57.005 "num_base_bdevs_operational": 4, 00:14:57.005 "base_bdevs_list": [ 00:14:57.005 { 00:14:57.005 "name": "BaseBdev1", 00:14:57.005 "uuid": "62b2bae7-c5ed-4eea-b00e-853f1a824582", 00:14:57.005 "is_configured": true, 00:14:57.005 "data_offset": 2048, 00:14:57.005 "data_size": 63488 00:14:57.005 }, 00:14:57.005 { 00:14:57.005 "name": "BaseBdev2", 00:14:57.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.005 "is_configured": false, 00:14:57.005 "data_offset": 0, 00:14:57.005 "data_size": 0 00:14:57.005 }, 00:14:57.005 { 00:14:57.005 "name": "BaseBdev3", 00:14:57.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.005 "is_configured": false, 00:14:57.005 "data_offset": 0, 00:14:57.005 "data_size": 0 00:14:57.005 }, 00:14:57.006 { 00:14:57.006 "name": "BaseBdev4", 00:14:57.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.006 "is_configured": false, 00:14:57.006 "data_offset": 0, 00:14:57.006 "data_size": 0 00:14:57.006 } 00:14:57.006 ] 00:14:57.006 }' 00:14:57.006 06:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.006 06:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.661 06:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:57.661 06:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.661 06:24:41 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:57.661 [2024-11-26 06:24:41.532858] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:57.661 [2024-11-26 06:24:41.532941] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:57.661 06:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.662 06:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:57.662 06:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.662 06:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.662 [2024-11-26 06:24:41.540895] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:57.662 [2024-11-26 06:24:41.542798] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:57.662 [2024-11-26 06:24:41.542850] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:57.662 [2024-11-26 06:24:41.542860] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:57.662 [2024-11-26 06:24:41.542871] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:57.662 [2024-11-26 06:24:41.542877] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:57.662 [2024-11-26 06:24:41.542886] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:57.662 06:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.662 06:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:57.662 06:24:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:57.662 06:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:57.662 06:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:57.662 06:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:57.662 06:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:57.662 06:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:57.662 06:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:57.662 06:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.662 06:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.662 06:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.662 06:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.662 06:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.662 06:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.662 06:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.662 06:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:57.662 06:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.662 06:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.662 "name": 
"Existed_Raid", 00:14:57.662 "uuid": "d1120723-c9a8-4069-a2e4-e053b4b5a6f1", 00:14:57.662 "strip_size_kb": 0, 00:14:57.662 "state": "configuring", 00:14:57.662 "raid_level": "raid1", 00:14:57.662 "superblock": true, 00:14:57.662 "num_base_bdevs": 4, 00:14:57.662 "num_base_bdevs_discovered": 1, 00:14:57.662 "num_base_bdevs_operational": 4, 00:14:57.662 "base_bdevs_list": [ 00:14:57.662 { 00:14:57.662 "name": "BaseBdev1", 00:14:57.662 "uuid": "62b2bae7-c5ed-4eea-b00e-853f1a824582", 00:14:57.662 "is_configured": true, 00:14:57.662 "data_offset": 2048, 00:14:57.662 "data_size": 63488 00:14:57.662 }, 00:14:57.662 { 00:14:57.662 "name": "BaseBdev2", 00:14:57.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.662 "is_configured": false, 00:14:57.662 "data_offset": 0, 00:14:57.662 "data_size": 0 00:14:57.662 }, 00:14:57.662 { 00:14:57.662 "name": "BaseBdev3", 00:14:57.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.662 "is_configured": false, 00:14:57.662 "data_offset": 0, 00:14:57.662 "data_size": 0 00:14:57.662 }, 00:14:57.662 { 00:14:57.662 "name": "BaseBdev4", 00:14:57.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.662 "is_configured": false, 00:14:57.662 "data_offset": 0, 00:14:57.662 "data_size": 0 00:14:57.662 } 00:14:57.662 ] 00:14:57.662 }' 00:14:57.662 06:24:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.662 06:24:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.921 06:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:57.921 06:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.921 06:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.181 [2024-11-26 06:24:42.065856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:58.181 
BaseBdev2 00:14:58.181 06:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.181 06:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:58.181 06:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:58.181 06:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:58.181 06:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:58.181 06:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:58.181 06:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:58.181 06:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:58.181 06:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.181 06:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.181 06:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.181 06:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:58.181 06:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.181 06:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.181 [ 00:14:58.181 { 00:14:58.181 "name": "BaseBdev2", 00:14:58.181 "aliases": [ 00:14:58.181 "29c99c28-7467-4ff6-aa32-2fe9546c74ce" 00:14:58.181 ], 00:14:58.181 "product_name": "Malloc disk", 00:14:58.181 "block_size": 512, 00:14:58.181 "num_blocks": 65536, 00:14:58.181 "uuid": "29c99c28-7467-4ff6-aa32-2fe9546c74ce", 00:14:58.181 "assigned_rate_limits": { 
00:14:58.181 "rw_ios_per_sec": 0, 00:14:58.181 "rw_mbytes_per_sec": 0, 00:14:58.182 "r_mbytes_per_sec": 0, 00:14:58.182 "w_mbytes_per_sec": 0 00:14:58.182 }, 00:14:58.182 "claimed": true, 00:14:58.182 "claim_type": "exclusive_write", 00:14:58.182 "zoned": false, 00:14:58.182 "supported_io_types": { 00:14:58.182 "read": true, 00:14:58.182 "write": true, 00:14:58.182 "unmap": true, 00:14:58.182 "flush": true, 00:14:58.182 "reset": true, 00:14:58.182 "nvme_admin": false, 00:14:58.182 "nvme_io": false, 00:14:58.182 "nvme_io_md": false, 00:14:58.182 "write_zeroes": true, 00:14:58.182 "zcopy": true, 00:14:58.182 "get_zone_info": false, 00:14:58.182 "zone_management": false, 00:14:58.182 "zone_append": false, 00:14:58.182 "compare": false, 00:14:58.182 "compare_and_write": false, 00:14:58.182 "abort": true, 00:14:58.182 "seek_hole": false, 00:14:58.182 "seek_data": false, 00:14:58.182 "copy": true, 00:14:58.182 "nvme_iov_md": false 00:14:58.182 }, 00:14:58.182 "memory_domains": [ 00:14:58.182 { 00:14:58.182 "dma_device_id": "system", 00:14:58.182 "dma_device_type": 1 00:14:58.182 }, 00:14:58.182 { 00:14:58.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:58.182 "dma_device_type": 2 00:14:58.182 } 00:14:58.182 ], 00:14:58.182 "driver_specific": {} 00:14:58.182 } 00:14:58.182 ] 00:14:58.182 06:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.182 06:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:58.182 06:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:58.182 06:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:58.182 06:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:58.182 06:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:14:58.182 06:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:58.182 06:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:58.182 06:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:58.182 06:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:58.182 06:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.182 06:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.182 06:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.182 06:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.182 06:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.182 06:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:58.182 06:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.182 06:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.182 06:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.182 06:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.182 "name": "Existed_Raid", 00:14:58.182 "uuid": "d1120723-c9a8-4069-a2e4-e053b4b5a6f1", 00:14:58.182 "strip_size_kb": 0, 00:14:58.182 "state": "configuring", 00:14:58.182 "raid_level": "raid1", 00:14:58.182 "superblock": true, 00:14:58.182 "num_base_bdevs": 4, 00:14:58.182 "num_base_bdevs_discovered": 2, 00:14:58.182 "num_base_bdevs_operational": 4, 00:14:58.182 
"base_bdevs_list": [ 00:14:58.182 { 00:14:58.182 "name": "BaseBdev1", 00:14:58.182 "uuid": "62b2bae7-c5ed-4eea-b00e-853f1a824582", 00:14:58.182 "is_configured": true, 00:14:58.182 "data_offset": 2048, 00:14:58.182 "data_size": 63488 00:14:58.182 }, 00:14:58.182 { 00:14:58.182 "name": "BaseBdev2", 00:14:58.182 "uuid": "29c99c28-7467-4ff6-aa32-2fe9546c74ce", 00:14:58.182 "is_configured": true, 00:14:58.182 "data_offset": 2048, 00:14:58.182 "data_size": 63488 00:14:58.182 }, 00:14:58.182 { 00:14:58.182 "name": "BaseBdev3", 00:14:58.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.182 "is_configured": false, 00:14:58.182 "data_offset": 0, 00:14:58.182 "data_size": 0 00:14:58.182 }, 00:14:58.182 { 00:14:58.182 "name": "BaseBdev4", 00:14:58.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.182 "is_configured": false, 00:14:58.182 "data_offset": 0, 00:14:58.182 "data_size": 0 00:14:58.182 } 00:14:58.182 ] 00:14:58.182 }' 00:14:58.182 06:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.182 06:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.752 06:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:58.752 06:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.752 06:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.752 [2024-11-26 06:24:42.657736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:58.752 BaseBdev3 00:14:58.752 06:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.752 06:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:58.753 06:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev3 00:14:58.753 06:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:58.753 06:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:58.753 06:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:58.753 06:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:58.753 06:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:58.753 06:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.753 06:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.753 06:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.753 06:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:58.753 06:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.753 06:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.753 [ 00:14:58.753 { 00:14:58.753 "name": "BaseBdev3", 00:14:58.753 "aliases": [ 00:14:58.753 "d8d330e2-a33d-48f2-b1d4-722b2781c646" 00:14:58.753 ], 00:14:58.753 "product_name": "Malloc disk", 00:14:58.753 "block_size": 512, 00:14:58.753 "num_blocks": 65536, 00:14:58.753 "uuid": "d8d330e2-a33d-48f2-b1d4-722b2781c646", 00:14:58.753 "assigned_rate_limits": { 00:14:58.753 "rw_ios_per_sec": 0, 00:14:58.753 "rw_mbytes_per_sec": 0, 00:14:58.753 "r_mbytes_per_sec": 0, 00:14:58.753 "w_mbytes_per_sec": 0 00:14:58.753 }, 00:14:58.753 "claimed": true, 00:14:58.753 "claim_type": "exclusive_write", 00:14:58.753 "zoned": false, 00:14:58.753 "supported_io_types": { 00:14:58.753 "read": true, 00:14:58.753 
"write": true, 00:14:58.753 "unmap": true, 00:14:58.753 "flush": true, 00:14:58.753 "reset": true, 00:14:58.753 "nvme_admin": false, 00:14:58.753 "nvme_io": false, 00:14:58.753 "nvme_io_md": false, 00:14:58.753 "write_zeroes": true, 00:14:58.753 "zcopy": true, 00:14:58.753 "get_zone_info": false, 00:14:58.753 "zone_management": false, 00:14:58.753 "zone_append": false, 00:14:58.753 "compare": false, 00:14:58.753 "compare_and_write": false, 00:14:58.753 "abort": true, 00:14:58.753 "seek_hole": false, 00:14:58.753 "seek_data": false, 00:14:58.753 "copy": true, 00:14:58.753 "nvme_iov_md": false 00:14:58.753 }, 00:14:58.753 "memory_domains": [ 00:14:58.753 { 00:14:58.753 "dma_device_id": "system", 00:14:58.753 "dma_device_type": 1 00:14:58.753 }, 00:14:58.753 { 00:14:58.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:58.753 "dma_device_type": 2 00:14:58.753 } 00:14:58.753 ], 00:14:58.753 "driver_specific": {} 00:14:58.753 } 00:14:58.753 ] 00:14:58.753 06:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.753 06:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:58.753 06:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:58.753 06:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:58.753 06:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:58.753 06:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:58.753 06:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:58.753 06:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:58.753 06:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:14:58.753 06:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:58.753 06:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.753 06:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.753 06:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.753 06:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.753 06:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.753 06:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:58.753 06:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.753 06:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.753 06:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.753 06:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.753 "name": "Existed_Raid", 00:14:58.753 "uuid": "d1120723-c9a8-4069-a2e4-e053b4b5a6f1", 00:14:58.753 "strip_size_kb": 0, 00:14:58.753 "state": "configuring", 00:14:58.753 "raid_level": "raid1", 00:14:58.753 "superblock": true, 00:14:58.753 "num_base_bdevs": 4, 00:14:58.753 "num_base_bdevs_discovered": 3, 00:14:58.753 "num_base_bdevs_operational": 4, 00:14:58.753 "base_bdevs_list": [ 00:14:58.753 { 00:14:58.753 "name": "BaseBdev1", 00:14:58.753 "uuid": "62b2bae7-c5ed-4eea-b00e-853f1a824582", 00:14:58.753 "is_configured": true, 00:14:58.753 "data_offset": 2048, 00:14:58.753 "data_size": 63488 00:14:58.753 }, 00:14:58.753 { 00:14:58.753 "name": "BaseBdev2", 00:14:58.753 "uuid": 
"29c99c28-7467-4ff6-aa32-2fe9546c74ce", 00:14:58.753 "is_configured": true, 00:14:58.753 "data_offset": 2048, 00:14:58.753 "data_size": 63488 00:14:58.753 }, 00:14:58.753 { 00:14:58.753 "name": "BaseBdev3", 00:14:58.753 "uuid": "d8d330e2-a33d-48f2-b1d4-722b2781c646", 00:14:58.753 "is_configured": true, 00:14:58.753 "data_offset": 2048, 00:14:58.753 "data_size": 63488 00:14:58.753 }, 00:14:58.753 { 00:14:58.753 "name": "BaseBdev4", 00:14:58.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.753 "is_configured": false, 00:14:58.753 "data_offset": 0, 00:14:58.753 "data_size": 0 00:14:58.753 } 00:14:58.753 ] 00:14:58.753 }' 00:14:58.753 06:24:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.753 06:24:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.013 06:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:59.013 06:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.013 06:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.273 [2024-11-26 06:24:43.170766] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:59.273 [2024-11-26 06:24:43.171084] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:59.273 [2024-11-26 06:24:43.171101] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:59.273 [2024-11-26 06:24:43.171376] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:59.273 [2024-11-26 06:24:43.171539] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:59.273 [2024-11-26 06:24:43.171554] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 
00:14:59.273 BaseBdev4 00:14:59.273 [2024-11-26 06:24:43.171745] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:59.273 06:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.273 06:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:59.273 06:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:59.273 06:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:59.273 06:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:59.273 06:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:59.273 06:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:59.273 06:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:59.273 06:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.273 06:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.273 06:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.273 06:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:59.273 06:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.273 06:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.273 [ 00:14:59.273 { 00:14:59.273 "name": "BaseBdev4", 00:14:59.273 "aliases": [ 00:14:59.273 "84f1c861-5907-4637-a539-c364f0f20e78" 00:14:59.273 ], 00:14:59.273 "product_name": "Malloc disk", 00:14:59.273 "block_size": 512, 00:14:59.273 
"num_blocks": 65536, 00:14:59.273 "uuid": "84f1c861-5907-4637-a539-c364f0f20e78", 00:14:59.273 "assigned_rate_limits": { 00:14:59.273 "rw_ios_per_sec": 0, 00:14:59.273 "rw_mbytes_per_sec": 0, 00:14:59.273 "r_mbytes_per_sec": 0, 00:14:59.273 "w_mbytes_per_sec": 0 00:14:59.273 }, 00:14:59.273 "claimed": true, 00:14:59.273 "claim_type": "exclusive_write", 00:14:59.273 "zoned": false, 00:14:59.273 "supported_io_types": { 00:14:59.273 "read": true, 00:14:59.273 "write": true, 00:14:59.273 "unmap": true, 00:14:59.273 "flush": true, 00:14:59.273 "reset": true, 00:14:59.273 "nvme_admin": false, 00:14:59.273 "nvme_io": false, 00:14:59.273 "nvme_io_md": false, 00:14:59.273 "write_zeroes": true, 00:14:59.273 "zcopy": true, 00:14:59.273 "get_zone_info": false, 00:14:59.273 "zone_management": false, 00:14:59.273 "zone_append": false, 00:14:59.273 "compare": false, 00:14:59.273 "compare_and_write": false, 00:14:59.273 "abort": true, 00:14:59.273 "seek_hole": false, 00:14:59.273 "seek_data": false, 00:14:59.273 "copy": true, 00:14:59.273 "nvme_iov_md": false 00:14:59.273 }, 00:14:59.273 "memory_domains": [ 00:14:59.273 { 00:14:59.273 "dma_device_id": "system", 00:14:59.273 "dma_device_type": 1 00:14:59.273 }, 00:14:59.273 { 00:14:59.273 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:59.273 "dma_device_type": 2 00:14:59.273 } 00:14:59.273 ], 00:14:59.273 "driver_specific": {} 00:14:59.273 } 00:14:59.273 ] 00:14:59.273 06:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.273 06:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:59.273 06:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:59.273 06:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:59.273 06:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:14:59.273 06:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:59.273 06:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:59.273 06:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:59.273 06:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:59.273 06:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:59.274 06:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.274 06:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.274 06:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.274 06:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.274 06:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.274 06:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:59.274 06:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.274 06:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.274 06:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.274 06:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:59.274 "name": "Existed_Raid", 00:14:59.274 "uuid": "d1120723-c9a8-4069-a2e4-e053b4b5a6f1", 00:14:59.274 "strip_size_kb": 0, 00:14:59.274 "state": "online", 00:14:59.274 "raid_level": "raid1", 00:14:59.274 "superblock": true, 00:14:59.274 "num_base_bdevs": 4, 
00:14:59.274 "num_base_bdevs_discovered": 4, 00:14:59.274 "num_base_bdevs_operational": 4, 00:14:59.274 "base_bdevs_list": [ 00:14:59.274 { 00:14:59.274 "name": "BaseBdev1", 00:14:59.274 "uuid": "62b2bae7-c5ed-4eea-b00e-853f1a824582", 00:14:59.274 "is_configured": true, 00:14:59.274 "data_offset": 2048, 00:14:59.274 "data_size": 63488 00:14:59.274 }, 00:14:59.274 { 00:14:59.274 "name": "BaseBdev2", 00:14:59.274 "uuid": "29c99c28-7467-4ff6-aa32-2fe9546c74ce", 00:14:59.274 "is_configured": true, 00:14:59.274 "data_offset": 2048, 00:14:59.274 "data_size": 63488 00:14:59.274 }, 00:14:59.274 { 00:14:59.274 "name": "BaseBdev3", 00:14:59.274 "uuid": "d8d330e2-a33d-48f2-b1d4-722b2781c646", 00:14:59.274 "is_configured": true, 00:14:59.274 "data_offset": 2048, 00:14:59.274 "data_size": 63488 00:14:59.274 }, 00:14:59.274 { 00:14:59.274 "name": "BaseBdev4", 00:14:59.274 "uuid": "84f1c861-5907-4637-a539-c364f0f20e78", 00:14:59.274 "is_configured": true, 00:14:59.274 "data_offset": 2048, 00:14:59.274 "data_size": 63488 00:14:59.274 } 00:14:59.274 ] 00:14:59.274 }' 00:14:59.274 06:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.274 06:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.533 06:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:59.533 06:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:59.533 06:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:59.533 06:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:59.533 06:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:59.533 06:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:59.533 
06:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:59.533 06:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:59.533 06:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.533 06:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.533 [2024-11-26 06:24:43.618446] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:59.533 06:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.533 06:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:59.533 "name": "Existed_Raid", 00:14:59.533 "aliases": [ 00:14:59.533 "d1120723-c9a8-4069-a2e4-e053b4b5a6f1" 00:14:59.533 ], 00:14:59.533 "product_name": "Raid Volume", 00:14:59.533 "block_size": 512, 00:14:59.533 "num_blocks": 63488, 00:14:59.533 "uuid": "d1120723-c9a8-4069-a2e4-e053b4b5a6f1", 00:14:59.533 "assigned_rate_limits": { 00:14:59.533 "rw_ios_per_sec": 0, 00:14:59.533 "rw_mbytes_per_sec": 0, 00:14:59.533 "r_mbytes_per_sec": 0, 00:14:59.533 "w_mbytes_per_sec": 0 00:14:59.533 }, 00:14:59.533 "claimed": false, 00:14:59.533 "zoned": false, 00:14:59.533 "supported_io_types": { 00:14:59.533 "read": true, 00:14:59.533 "write": true, 00:14:59.533 "unmap": false, 00:14:59.533 "flush": false, 00:14:59.533 "reset": true, 00:14:59.533 "nvme_admin": false, 00:14:59.533 "nvme_io": false, 00:14:59.533 "nvme_io_md": false, 00:14:59.533 "write_zeroes": true, 00:14:59.533 "zcopy": false, 00:14:59.533 "get_zone_info": false, 00:14:59.533 "zone_management": false, 00:14:59.533 "zone_append": false, 00:14:59.533 "compare": false, 00:14:59.533 "compare_and_write": false, 00:14:59.533 "abort": false, 00:14:59.533 "seek_hole": false, 00:14:59.533 "seek_data": false, 00:14:59.533 "copy": false, 00:14:59.533 
"nvme_iov_md": false 00:14:59.533 }, 00:14:59.533 "memory_domains": [ 00:14:59.533 { 00:14:59.533 "dma_device_id": "system", 00:14:59.533 "dma_device_type": 1 00:14:59.533 }, 00:14:59.533 { 00:14:59.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:59.533 "dma_device_type": 2 00:14:59.533 }, 00:14:59.533 { 00:14:59.533 "dma_device_id": "system", 00:14:59.533 "dma_device_type": 1 00:14:59.533 }, 00:14:59.533 { 00:14:59.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:59.533 "dma_device_type": 2 00:14:59.533 }, 00:14:59.533 { 00:14:59.533 "dma_device_id": "system", 00:14:59.533 "dma_device_type": 1 00:14:59.533 }, 00:14:59.533 { 00:14:59.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:59.533 "dma_device_type": 2 00:14:59.533 }, 00:14:59.533 { 00:14:59.533 "dma_device_id": "system", 00:14:59.533 "dma_device_type": 1 00:14:59.533 }, 00:14:59.533 { 00:14:59.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:59.533 "dma_device_type": 2 00:14:59.533 } 00:14:59.533 ], 00:14:59.533 "driver_specific": { 00:14:59.533 "raid": { 00:14:59.533 "uuid": "d1120723-c9a8-4069-a2e4-e053b4b5a6f1", 00:14:59.533 "strip_size_kb": 0, 00:14:59.533 "state": "online", 00:14:59.533 "raid_level": "raid1", 00:14:59.533 "superblock": true, 00:14:59.533 "num_base_bdevs": 4, 00:14:59.533 "num_base_bdevs_discovered": 4, 00:14:59.533 "num_base_bdevs_operational": 4, 00:14:59.533 "base_bdevs_list": [ 00:14:59.533 { 00:14:59.533 "name": "BaseBdev1", 00:14:59.533 "uuid": "62b2bae7-c5ed-4eea-b00e-853f1a824582", 00:14:59.533 "is_configured": true, 00:14:59.533 "data_offset": 2048, 00:14:59.533 "data_size": 63488 00:14:59.533 }, 00:14:59.533 { 00:14:59.533 "name": "BaseBdev2", 00:14:59.533 "uuid": "29c99c28-7467-4ff6-aa32-2fe9546c74ce", 00:14:59.533 "is_configured": true, 00:14:59.533 "data_offset": 2048, 00:14:59.533 "data_size": 63488 00:14:59.533 }, 00:14:59.533 { 00:14:59.533 "name": "BaseBdev3", 00:14:59.533 "uuid": "d8d330e2-a33d-48f2-b1d4-722b2781c646", 00:14:59.533 "is_configured": true, 
00:14:59.533 "data_offset": 2048, 00:14:59.533 "data_size": 63488 00:14:59.533 }, 00:14:59.533 { 00:14:59.533 "name": "BaseBdev4", 00:14:59.533 "uuid": "84f1c861-5907-4637-a539-c364f0f20e78", 00:14:59.533 "is_configured": true, 00:14:59.533 "data_offset": 2048, 00:14:59.533 "data_size": 63488 00:14:59.533 } 00:14:59.533 ] 00:14:59.533 } 00:14:59.533 } 00:14:59.533 }' 00:14:59.533 06:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:59.792 06:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:59.792 BaseBdev2 00:14:59.792 BaseBdev3 00:14:59.792 BaseBdev4' 00:14:59.792 06:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:59.792 06:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:59.792 06:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:59.792 06:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:59.792 06:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:59.792 06:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.792 06:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.792 06:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.792 06:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:59.792 06:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:59.792 06:24:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:59.792 06:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:59.792 06:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.792 06:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.792 06:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:59.792 06:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.792 06:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:59.792 06:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:59.793 06:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:59.793 06:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:59.793 06:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:59.793 06:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.793 06:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.793 06:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.793 06:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:59.793 06:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:59.793 06:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:14:59.793 06:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:59.793 06:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.793 06:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.793 06:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:59.793 06:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.793 06:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:59.793 06:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:59.793 06:24:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:59.793 06:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.793 06:24:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.793 [2024-11-26 06:24:43.893885] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:00.052 06:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.052 06:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:00.052 06:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:15:00.052 06:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:00.052 06:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:15:00.052 06:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:00.052 06:24:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:15:00.052 06:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:00.052 06:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:00.052 06:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:00.052 06:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:00.052 06:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:00.052 06:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.052 06:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.052 06:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.052 06:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.052 06:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.052 06:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.052 06:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.052 06:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:00.052 06:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.052 06:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.052 "name": "Existed_Raid", 00:15:00.052 "uuid": "d1120723-c9a8-4069-a2e4-e053b4b5a6f1", 00:15:00.052 "strip_size_kb": 0, 00:15:00.052 
"state": "online", 00:15:00.052 "raid_level": "raid1", 00:15:00.052 "superblock": true, 00:15:00.052 "num_base_bdevs": 4, 00:15:00.052 "num_base_bdevs_discovered": 3, 00:15:00.052 "num_base_bdevs_operational": 3, 00:15:00.052 "base_bdevs_list": [ 00:15:00.052 { 00:15:00.053 "name": null, 00:15:00.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.053 "is_configured": false, 00:15:00.053 "data_offset": 0, 00:15:00.053 "data_size": 63488 00:15:00.053 }, 00:15:00.053 { 00:15:00.053 "name": "BaseBdev2", 00:15:00.053 "uuid": "29c99c28-7467-4ff6-aa32-2fe9546c74ce", 00:15:00.053 "is_configured": true, 00:15:00.053 "data_offset": 2048, 00:15:00.053 "data_size": 63488 00:15:00.053 }, 00:15:00.053 { 00:15:00.053 "name": "BaseBdev3", 00:15:00.053 "uuid": "d8d330e2-a33d-48f2-b1d4-722b2781c646", 00:15:00.053 "is_configured": true, 00:15:00.053 "data_offset": 2048, 00:15:00.053 "data_size": 63488 00:15:00.053 }, 00:15:00.053 { 00:15:00.053 "name": "BaseBdev4", 00:15:00.053 "uuid": "84f1c861-5907-4637-a539-c364f0f20e78", 00:15:00.053 "is_configured": true, 00:15:00.053 "data_offset": 2048, 00:15:00.053 "data_size": 63488 00:15:00.053 } 00:15:00.053 ] 00:15:00.053 }' 00:15:00.053 06:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.053 06:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.312 06:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:00.312 06:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:00.312 06:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.312 06:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.312 06:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:00.312 06:24:44 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.312 06:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.570 06:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:00.570 06:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:00.570 06:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:00.570 06:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.570 06:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.571 [2024-11-26 06:24:44.451715] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:00.571 06:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.571 06:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:00.571 06:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:00.571 06:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.571 06:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.571 06:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:00.571 06:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.571 06:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.571 06:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:00.571 06:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:15:00.571 06:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:00.571 06:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.571 06:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.571 [2024-11-26 06:24:44.608405] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:00.899 06:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.899 06:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:00.899 06:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:00.899 06:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.899 06:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.899 06:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:00.899 06:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.899 06:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.899 06:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:00.899 06:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:00.899 06:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:15:00.899 06:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.899 06:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.899 [2024-11-26 06:24:44.762328] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:00.899 [2024-11-26 06:24:44.762549] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:00.899 [2024-11-26 06:24:44.860632] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:00.899 [2024-11-26 06:24:44.860707] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:00.899 [2024-11-26 06:24:44.860723] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:00.899 06:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.899 06:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:00.899 06:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:00.899 06:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.899 06:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.899 06:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.899 06:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:00.899 06:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.899 06:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:00.899 06:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:00.899 06:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:15:00.899 06:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:00.899 06:24:44 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:00.899 06:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:00.899 06:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.899 06:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.899 BaseBdev2 00:15:00.899 06:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.899 06:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:00.899 06:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:00.899 06:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:00.899 06:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:00.899 06:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:00.899 06:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:00.899 06:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:00.899 06:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.899 06:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.899 06:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.899 06:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:00.899 06:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.899 06:24:44 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:15:00.899 [ 00:15:00.899 { 00:15:00.899 "name": "BaseBdev2", 00:15:00.899 "aliases": [ 00:15:00.899 "0f685c07-31b4-41ff-a0e7-2ba406f48196" 00:15:00.899 ], 00:15:00.899 "product_name": "Malloc disk", 00:15:00.899 "block_size": 512, 00:15:00.899 "num_blocks": 65536, 00:15:00.899 "uuid": "0f685c07-31b4-41ff-a0e7-2ba406f48196", 00:15:00.899 "assigned_rate_limits": { 00:15:00.899 "rw_ios_per_sec": 0, 00:15:00.899 "rw_mbytes_per_sec": 0, 00:15:00.899 "r_mbytes_per_sec": 0, 00:15:00.899 "w_mbytes_per_sec": 0 00:15:00.899 }, 00:15:00.899 "claimed": false, 00:15:00.899 "zoned": false, 00:15:00.899 "supported_io_types": { 00:15:00.899 "read": true, 00:15:00.899 "write": true, 00:15:00.899 "unmap": true, 00:15:00.899 "flush": true, 00:15:00.899 "reset": true, 00:15:00.900 "nvme_admin": false, 00:15:00.900 "nvme_io": false, 00:15:00.900 "nvme_io_md": false, 00:15:00.900 "write_zeroes": true, 00:15:00.900 "zcopy": true, 00:15:00.900 "get_zone_info": false, 00:15:00.900 "zone_management": false, 00:15:00.900 "zone_append": false, 00:15:00.900 "compare": false, 00:15:00.900 "compare_and_write": false, 00:15:00.900 "abort": true, 00:15:00.900 "seek_hole": false, 00:15:00.900 "seek_data": false, 00:15:00.900 "copy": true, 00:15:00.900 "nvme_iov_md": false 00:15:00.900 }, 00:15:00.900 "memory_domains": [ 00:15:00.900 { 00:15:00.900 "dma_device_id": "system", 00:15:00.900 "dma_device_type": 1 00:15:00.900 }, 00:15:00.900 { 00:15:00.900 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:00.900 "dma_device_type": 2 00:15:00.900 } 00:15:00.900 ], 00:15:00.900 "driver_specific": {} 00:15:00.900 } 00:15:00.900 ] 00:15:00.900 06:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.900 06:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:00.900 06:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:00.900 06:24:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:00.900 06:24:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:00.900 06:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.900 06:24:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.159 BaseBdev3 00:15:01.159 06:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.159 06:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:01.159 06:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:01.159 06:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:01.159 06:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:01.159 06:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:01.159 06:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:01.159 06:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:01.159 06:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.159 06:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.159 06:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.159 06:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:01.159 06:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.159 06:24:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.159 [ 00:15:01.159 { 00:15:01.159 "name": "BaseBdev3", 00:15:01.159 "aliases": [ 00:15:01.159 "55ba5c3f-d7e1-468c-bd0f-20fa1d74966f" 00:15:01.159 ], 00:15:01.159 "product_name": "Malloc disk", 00:15:01.159 "block_size": 512, 00:15:01.159 "num_blocks": 65536, 00:15:01.159 "uuid": "55ba5c3f-d7e1-468c-bd0f-20fa1d74966f", 00:15:01.159 "assigned_rate_limits": { 00:15:01.159 "rw_ios_per_sec": 0, 00:15:01.159 "rw_mbytes_per_sec": 0, 00:15:01.159 "r_mbytes_per_sec": 0, 00:15:01.159 "w_mbytes_per_sec": 0 00:15:01.159 }, 00:15:01.159 "claimed": false, 00:15:01.159 "zoned": false, 00:15:01.159 "supported_io_types": { 00:15:01.159 "read": true, 00:15:01.159 "write": true, 00:15:01.159 "unmap": true, 00:15:01.159 "flush": true, 00:15:01.159 "reset": true, 00:15:01.159 "nvme_admin": false, 00:15:01.159 "nvme_io": false, 00:15:01.159 "nvme_io_md": false, 00:15:01.159 "write_zeroes": true, 00:15:01.159 "zcopy": true, 00:15:01.159 "get_zone_info": false, 00:15:01.159 "zone_management": false, 00:15:01.159 "zone_append": false, 00:15:01.159 "compare": false, 00:15:01.160 "compare_and_write": false, 00:15:01.160 "abort": true, 00:15:01.160 "seek_hole": false, 00:15:01.160 "seek_data": false, 00:15:01.160 "copy": true, 00:15:01.160 "nvme_iov_md": false 00:15:01.160 }, 00:15:01.160 "memory_domains": [ 00:15:01.160 { 00:15:01.160 "dma_device_id": "system", 00:15:01.160 "dma_device_type": 1 00:15:01.160 }, 00:15:01.160 { 00:15:01.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.160 "dma_device_type": 2 00:15:01.160 } 00:15:01.160 ], 00:15:01.160 "driver_specific": {} 00:15:01.160 } 00:15:01.160 ] 00:15:01.160 06:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.160 06:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:01.160 06:24:45 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:01.160 06:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:01.160 06:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:01.160 06:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.160 06:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.160 BaseBdev4 00:15:01.160 06:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.160 06:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:15:01.160 06:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:01.160 06:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:01.160 06:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:01.160 06:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:01.160 06:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:01.160 06:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:01.160 06:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.160 06:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.160 06:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.160 06:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:01.160 06:24:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.160 06:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.160 [ 00:15:01.160 { 00:15:01.160 "name": "BaseBdev4", 00:15:01.160 "aliases": [ 00:15:01.160 "63fe2354-6d29-4493-aecc-901bfa71b77e" 00:15:01.160 ], 00:15:01.160 "product_name": "Malloc disk", 00:15:01.160 "block_size": 512, 00:15:01.160 "num_blocks": 65536, 00:15:01.160 "uuid": "63fe2354-6d29-4493-aecc-901bfa71b77e", 00:15:01.160 "assigned_rate_limits": { 00:15:01.160 "rw_ios_per_sec": 0, 00:15:01.160 "rw_mbytes_per_sec": 0, 00:15:01.160 "r_mbytes_per_sec": 0, 00:15:01.160 "w_mbytes_per_sec": 0 00:15:01.160 }, 00:15:01.160 "claimed": false, 00:15:01.160 "zoned": false, 00:15:01.160 "supported_io_types": { 00:15:01.160 "read": true, 00:15:01.160 "write": true, 00:15:01.160 "unmap": true, 00:15:01.160 "flush": true, 00:15:01.160 "reset": true, 00:15:01.160 "nvme_admin": false, 00:15:01.160 "nvme_io": false, 00:15:01.160 "nvme_io_md": false, 00:15:01.160 "write_zeroes": true, 00:15:01.160 "zcopy": true, 00:15:01.160 "get_zone_info": false, 00:15:01.160 "zone_management": false, 00:15:01.160 "zone_append": false, 00:15:01.160 "compare": false, 00:15:01.160 "compare_and_write": false, 00:15:01.160 "abort": true, 00:15:01.160 "seek_hole": false, 00:15:01.160 "seek_data": false, 00:15:01.160 "copy": true, 00:15:01.160 "nvme_iov_md": false 00:15:01.160 }, 00:15:01.160 "memory_domains": [ 00:15:01.160 { 00:15:01.160 "dma_device_id": "system", 00:15:01.160 "dma_device_type": 1 00:15:01.160 }, 00:15:01.160 { 00:15:01.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.160 "dma_device_type": 2 00:15:01.160 } 00:15:01.160 ], 00:15:01.160 "driver_specific": {} 00:15:01.160 } 00:15:01.160 ] 00:15:01.160 06:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.160 06:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:15:01.160 06:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:01.160 06:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:01.160 06:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:01.160 06:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.160 06:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.160 [2024-11-26 06:24:45.119930] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:01.160 [2024-11-26 06:24:45.120132] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:01.160 [2024-11-26 06:24:45.120216] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:01.160 [2024-11-26 06:24:45.122738] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:01.160 [2024-11-26 06:24:45.122866] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:01.160 06:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.160 06:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:01.160 06:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:01.160 06:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:01.160 06:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:01.160 06:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:15:01.160 06:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:01.160 06:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.160 06:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.160 06:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.160 06:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.160 06:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.160 06:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:01.160 06:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.160 06:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.160 06:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.160 06:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.160 "name": "Existed_Raid", 00:15:01.160 "uuid": "1f6cd1ab-6b03-4507-9e16-fa4f3378bca8", 00:15:01.160 "strip_size_kb": 0, 00:15:01.160 "state": "configuring", 00:15:01.160 "raid_level": "raid1", 00:15:01.160 "superblock": true, 00:15:01.160 "num_base_bdevs": 4, 00:15:01.160 "num_base_bdevs_discovered": 3, 00:15:01.160 "num_base_bdevs_operational": 4, 00:15:01.160 "base_bdevs_list": [ 00:15:01.160 { 00:15:01.160 "name": "BaseBdev1", 00:15:01.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.160 "is_configured": false, 00:15:01.160 "data_offset": 0, 00:15:01.160 "data_size": 0 00:15:01.160 }, 00:15:01.160 { 00:15:01.160 "name": "BaseBdev2", 00:15:01.160 "uuid": "0f685c07-31b4-41ff-a0e7-2ba406f48196", 
00:15:01.160 "is_configured": true, 00:15:01.160 "data_offset": 2048, 00:15:01.160 "data_size": 63488 00:15:01.160 }, 00:15:01.160 { 00:15:01.160 "name": "BaseBdev3", 00:15:01.160 "uuid": "55ba5c3f-d7e1-468c-bd0f-20fa1d74966f", 00:15:01.160 "is_configured": true, 00:15:01.160 "data_offset": 2048, 00:15:01.160 "data_size": 63488 00:15:01.160 }, 00:15:01.160 { 00:15:01.160 "name": "BaseBdev4", 00:15:01.160 "uuid": "63fe2354-6d29-4493-aecc-901bfa71b77e", 00:15:01.160 "is_configured": true, 00:15:01.160 "data_offset": 2048, 00:15:01.160 "data_size": 63488 00:15:01.160 } 00:15:01.160 ] 00:15:01.160 }' 00:15:01.160 06:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.160 06:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.420 06:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:01.420 06:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.420 06:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.420 [2024-11-26 06:24:45.535266] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:01.421 06:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.421 06:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:01.421 06:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:01.421 06:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:01.421 06:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:01.421 06:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:15:01.421 06:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:01.421 06:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.421 06:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.421 06:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.421 06:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.421 06:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.421 06:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:01.421 06:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.421 06:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.681 06:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.681 06:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.681 "name": "Existed_Raid", 00:15:01.681 "uuid": "1f6cd1ab-6b03-4507-9e16-fa4f3378bca8", 00:15:01.681 "strip_size_kb": 0, 00:15:01.681 "state": "configuring", 00:15:01.681 "raid_level": "raid1", 00:15:01.681 "superblock": true, 00:15:01.681 "num_base_bdevs": 4, 00:15:01.681 "num_base_bdevs_discovered": 2, 00:15:01.681 "num_base_bdevs_operational": 4, 00:15:01.681 "base_bdevs_list": [ 00:15:01.681 { 00:15:01.681 "name": "BaseBdev1", 00:15:01.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.681 "is_configured": false, 00:15:01.681 "data_offset": 0, 00:15:01.681 "data_size": 0 00:15:01.681 }, 00:15:01.681 { 00:15:01.681 "name": null, 00:15:01.681 "uuid": "0f685c07-31b4-41ff-a0e7-2ba406f48196", 00:15:01.681 
"is_configured": false, 00:15:01.681 "data_offset": 0, 00:15:01.681 "data_size": 63488 00:15:01.681 }, 00:15:01.681 { 00:15:01.681 "name": "BaseBdev3", 00:15:01.681 "uuid": "55ba5c3f-d7e1-468c-bd0f-20fa1d74966f", 00:15:01.681 "is_configured": true, 00:15:01.681 "data_offset": 2048, 00:15:01.681 "data_size": 63488 00:15:01.681 }, 00:15:01.681 { 00:15:01.681 "name": "BaseBdev4", 00:15:01.681 "uuid": "63fe2354-6d29-4493-aecc-901bfa71b77e", 00:15:01.681 "is_configured": true, 00:15:01.681 "data_offset": 2048, 00:15:01.681 "data_size": 63488 00:15:01.681 } 00:15:01.681 ] 00:15:01.681 }' 00:15:01.681 06:24:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.681 06:24:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.941 06:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.941 06:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.941 06:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:01.941 06:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.941 06:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.941 06:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:01.941 06:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:01.941 06:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.941 06:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.200 [2024-11-26 06:24:46.099289] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:02.200 BaseBdev1 
00:15:02.200 06:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.200 06:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:02.200 06:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:02.200 06:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:02.200 06:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:02.200 06:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:02.200 06:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:02.200 06:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:02.200 06:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.200 06:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.200 06:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.200 06:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:02.200 06:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.200 06:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.200 [ 00:15:02.200 { 00:15:02.200 "name": "BaseBdev1", 00:15:02.200 "aliases": [ 00:15:02.200 "1ddcb428-7ebc-4bee-a878-73c1f43f9b83" 00:15:02.200 ], 00:15:02.200 "product_name": "Malloc disk", 00:15:02.200 "block_size": 512, 00:15:02.200 "num_blocks": 65536, 00:15:02.200 "uuid": "1ddcb428-7ebc-4bee-a878-73c1f43f9b83", 00:15:02.200 "assigned_rate_limits": { 00:15:02.200 
"rw_ios_per_sec": 0, 00:15:02.200 "rw_mbytes_per_sec": 0, 00:15:02.200 "r_mbytes_per_sec": 0, 00:15:02.200 "w_mbytes_per_sec": 0 00:15:02.200 }, 00:15:02.200 "claimed": true, 00:15:02.200 "claim_type": "exclusive_write", 00:15:02.200 "zoned": false, 00:15:02.200 "supported_io_types": { 00:15:02.200 "read": true, 00:15:02.200 "write": true, 00:15:02.200 "unmap": true, 00:15:02.200 "flush": true, 00:15:02.200 "reset": true, 00:15:02.200 "nvme_admin": false, 00:15:02.200 "nvme_io": false, 00:15:02.200 "nvme_io_md": false, 00:15:02.200 "write_zeroes": true, 00:15:02.200 "zcopy": true, 00:15:02.200 "get_zone_info": false, 00:15:02.200 "zone_management": false, 00:15:02.200 "zone_append": false, 00:15:02.200 "compare": false, 00:15:02.200 "compare_and_write": false, 00:15:02.200 "abort": true, 00:15:02.200 "seek_hole": false, 00:15:02.200 "seek_data": false, 00:15:02.200 "copy": true, 00:15:02.200 "nvme_iov_md": false 00:15:02.200 }, 00:15:02.200 "memory_domains": [ 00:15:02.201 { 00:15:02.201 "dma_device_id": "system", 00:15:02.201 "dma_device_type": 1 00:15:02.201 }, 00:15:02.201 { 00:15:02.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:02.201 "dma_device_type": 2 00:15:02.201 } 00:15:02.201 ], 00:15:02.201 "driver_specific": {} 00:15:02.201 } 00:15:02.201 ] 00:15:02.201 06:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.201 06:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:02.201 06:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:02.201 06:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:02.201 06:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:02.201 06:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:15:02.201 06:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:02.201 06:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:02.201 06:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.201 06:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.201 06:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.201 06:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.201 06:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.201 06:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:02.201 06:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.201 06:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.201 06:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.201 06:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.201 "name": "Existed_Raid", 00:15:02.201 "uuid": "1f6cd1ab-6b03-4507-9e16-fa4f3378bca8", 00:15:02.201 "strip_size_kb": 0, 00:15:02.201 "state": "configuring", 00:15:02.201 "raid_level": "raid1", 00:15:02.201 "superblock": true, 00:15:02.201 "num_base_bdevs": 4, 00:15:02.201 "num_base_bdevs_discovered": 3, 00:15:02.201 "num_base_bdevs_operational": 4, 00:15:02.201 "base_bdevs_list": [ 00:15:02.201 { 00:15:02.201 "name": "BaseBdev1", 00:15:02.201 "uuid": "1ddcb428-7ebc-4bee-a878-73c1f43f9b83", 00:15:02.201 "is_configured": true, 00:15:02.201 "data_offset": 2048, 00:15:02.201 "data_size": 63488 
00:15:02.201 }, 00:15:02.201 { 00:15:02.201 "name": null, 00:15:02.201 "uuid": "0f685c07-31b4-41ff-a0e7-2ba406f48196", 00:15:02.201 "is_configured": false, 00:15:02.201 "data_offset": 0, 00:15:02.201 "data_size": 63488 00:15:02.201 }, 00:15:02.201 { 00:15:02.201 "name": "BaseBdev3", 00:15:02.201 "uuid": "55ba5c3f-d7e1-468c-bd0f-20fa1d74966f", 00:15:02.201 "is_configured": true, 00:15:02.201 "data_offset": 2048, 00:15:02.201 "data_size": 63488 00:15:02.201 }, 00:15:02.201 { 00:15:02.201 "name": "BaseBdev4", 00:15:02.201 "uuid": "63fe2354-6d29-4493-aecc-901bfa71b77e", 00:15:02.201 "is_configured": true, 00:15:02.201 "data_offset": 2048, 00:15:02.201 "data_size": 63488 00:15:02.201 } 00:15:02.201 ] 00:15:02.201 }' 00:15:02.201 06:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.201 06:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.461 06:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:02.461 06:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.461 06:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.461 06:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.721 06:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.721 06:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:02.721 06:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:02.721 06:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.721 06:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.721 
[2024-11-26 06:24:46.634679] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:02.721 06:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.721 06:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:02.721 06:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:02.721 06:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:02.721 06:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:02.721 06:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:02.721 06:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:02.721 06:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.721 06:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.721 06:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.721 06:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.721 06:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.721 06:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.721 06:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:02.721 06:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.721 06:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.721 06:24:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.721 "name": "Existed_Raid", 00:15:02.721 "uuid": "1f6cd1ab-6b03-4507-9e16-fa4f3378bca8", 00:15:02.721 "strip_size_kb": 0, 00:15:02.721 "state": "configuring", 00:15:02.721 "raid_level": "raid1", 00:15:02.721 "superblock": true, 00:15:02.721 "num_base_bdevs": 4, 00:15:02.721 "num_base_bdevs_discovered": 2, 00:15:02.721 "num_base_bdevs_operational": 4, 00:15:02.721 "base_bdevs_list": [ 00:15:02.721 { 00:15:02.721 "name": "BaseBdev1", 00:15:02.721 "uuid": "1ddcb428-7ebc-4bee-a878-73c1f43f9b83", 00:15:02.721 "is_configured": true, 00:15:02.721 "data_offset": 2048, 00:15:02.721 "data_size": 63488 00:15:02.721 }, 00:15:02.721 { 00:15:02.721 "name": null, 00:15:02.721 "uuid": "0f685c07-31b4-41ff-a0e7-2ba406f48196", 00:15:02.721 "is_configured": false, 00:15:02.721 "data_offset": 0, 00:15:02.721 "data_size": 63488 00:15:02.721 }, 00:15:02.721 { 00:15:02.721 "name": null, 00:15:02.721 "uuid": "55ba5c3f-d7e1-468c-bd0f-20fa1d74966f", 00:15:02.721 "is_configured": false, 00:15:02.721 "data_offset": 0, 00:15:02.721 "data_size": 63488 00:15:02.721 }, 00:15:02.721 { 00:15:02.721 "name": "BaseBdev4", 00:15:02.721 "uuid": "63fe2354-6d29-4493-aecc-901bfa71b77e", 00:15:02.721 "is_configured": true, 00:15:02.721 "data_offset": 2048, 00:15:02.721 "data_size": 63488 00:15:02.721 } 00:15:02.721 ] 00:15:02.721 }' 00:15:02.721 06:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.721 06:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.291 06:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.291 06:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.291 06:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.291 06:24:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:03.291 06:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.291 06:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:03.291 06:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:03.291 06:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.291 06:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.291 [2024-11-26 06:24:47.189792] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:03.291 06:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.291 06:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:03.291 06:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:03.291 06:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:03.291 06:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:03.291 06:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:03.291 06:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:03.291 06:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.291 06:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.291 06:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:15:03.291 06:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.291 06:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.291 06:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.291 06:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.291 06:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:03.291 06:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.291 06:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.291 "name": "Existed_Raid", 00:15:03.291 "uuid": "1f6cd1ab-6b03-4507-9e16-fa4f3378bca8", 00:15:03.291 "strip_size_kb": 0, 00:15:03.291 "state": "configuring", 00:15:03.291 "raid_level": "raid1", 00:15:03.291 "superblock": true, 00:15:03.291 "num_base_bdevs": 4, 00:15:03.291 "num_base_bdevs_discovered": 3, 00:15:03.291 "num_base_bdevs_operational": 4, 00:15:03.291 "base_bdevs_list": [ 00:15:03.291 { 00:15:03.291 "name": "BaseBdev1", 00:15:03.291 "uuid": "1ddcb428-7ebc-4bee-a878-73c1f43f9b83", 00:15:03.291 "is_configured": true, 00:15:03.291 "data_offset": 2048, 00:15:03.291 "data_size": 63488 00:15:03.291 }, 00:15:03.291 { 00:15:03.291 "name": null, 00:15:03.291 "uuid": "0f685c07-31b4-41ff-a0e7-2ba406f48196", 00:15:03.291 "is_configured": false, 00:15:03.291 "data_offset": 0, 00:15:03.291 "data_size": 63488 00:15:03.291 }, 00:15:03.291 { 00:15:03.291 "name": "BaseBdev3", 00:15:03.291 "uuid": "55ba5c3f-d7e1-468c-bd0f-20fa1d74966f", 00:15:03.292 "is_configured": true, 00:15:03.292 "data_offset": 2048, 00:15:03.292 "data_size": 63488 00:15:03.292 }, 00:15:03.292 { 00:15:03.292 "name": "BaseBdev4", 00:15:03.292 "uuid": 
"63fe2354-6d29-4493-aecc-901bfa71b77e", 00:15:03.292 "is_configured": true, 00:15:03.292 "data_offset": 2048, 00:15:03.292 "data_size": 63488 00:15:03.292 } 00:15:03.292 ] 00:15:03.292 }' 00:15:03.292 06:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.292 06:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.551 06:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:03.551 06:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.551 06:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.551 06:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.551 06:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.551 06:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:03.551 06:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:03.551 06:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.551 06:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.551 [2024-11-26 06:24:47.645098] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:03.811 06:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.811 06:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:03.811 06:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:03.811 06:24:47 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:03.811 06:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:03.811 06:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:03.811 06:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:03.811 06:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.811 06:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.811 06:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.811 06:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.811 06:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.811 06:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:03.811 06:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.811 06:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.811 06:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.811 06:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.811 "name": "Existed_Raid", 00:15:03.811 "uuid": "1f6cd1ab-6b03-4507-9e16-fa4f3378bca8", 00:15:03.811 "strip_size_kb": 0, 00:15:03.811 "state": "configuring", 00:15:03.811 "raid_level": "raid1", 00:15:03.811 "superblock": true, 00:15:03.811 "num_base_bdevs": 4, 00:15:03.811 "num_base_bdevs_discovered": 2, 00:15:03.811 "num_base_bdevs_operational": 4, 00:15:03.811 "base_bdevs_list": [ 00:15:03.811 { 00:15:03.811 "name": null, 00:15:03.811 
"uuid": "1ddcb428-7ebc-4bee-a878-73c1f43f9b83", 00:15:03.811 "is_configured": false, 00:15:03.811 "data_offset": 0, 00:15:03.811 "data_size": 63488 00:15:03.811 }, 00:15:03.811 { 00:15:03.811 "name": null, 00:15:03.811 "uuid": "0f685c07-31b4-41ff-a0e7-2ba406f48196", 00:15:03.811 "is_configured": false, 00:15:03.811 "data_offset": 0, 00:15:03.811 "data_size": 63488 00:15:03.811 }, 00:15:03.811 { 00:15:03.811 "name": "BaseBdev3", 00:15:03.811 "uuid": "55ba5c3f-d7e1-468c-bd0f-20fa1d74966f", 00:15:03.811 "is_configured": true, 00:15:03.811 "data_offset": 2048, 00:15:03.811 "data_size": 63488 00:15:03.811 }, 00:15:03.811 { 00:15:03.811 "name": "BaseBdev4", 00:15:03.811 "uuid": "63fe2354-6d29-4493-aecc-901bfa71b77e", 00:15:03.811 "is_configured": true, 00:15:03.811 "data_offset": 2048, 00:15:03.811 "data_size": 63488 00:15:03.811 } 00:15:03.811 ] 00:15:03.811 }' 00:15:03.811 06:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.811 06:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.088 06:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:04.088 06:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.088 06:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.088 06:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.374 06:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.374 06:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:04.374 06:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:04.374 06:24:48 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.374 06:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.374 [2024-11-26 06:24:48.231308] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:04.374 06:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.374 06:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:04.374 06:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:04.374 06:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:04.374 06:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:04.374 06:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:04.374 06:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:04.374 06:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:04.374 06:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:04.374 06:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:04.374 06:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:04.374 06:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:04.374 06:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.374 06:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.374 06:24:48 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.374 06:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.374 06:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:04.374 "name": "Existed_Raid", 00:15:04.374 "uuid": "1f6cd1ab-6b03-4507-9e16-fa4f3378bca8", 00:15:04.374 "strip_size_kb": 0, 00:15:04.374 "state": "configuring", 00:15:04.374 "raid_level": "raid1", 00:15:04.374 "superblock": true, 00:15:04.374 "num_base_bdevs": 4, 00:15:04.374 "num_base_bdevs_discovered": 3, 00:15:04.374 "num_base_bdevs_operational": 4, 00:15:04.374 "base_bdevs_list": [ 00:15:04.374 { 00:15:04.374 "name": null, 00:15:04.374 "uuid": "1ddcb428-7ebc-4bee-a878-73c1f43f9b83", 00:15:04.374 "is_configured": false, 00:15:04.374 "data_offset": 0, 00:15:04.374 "data_size": 63488 00:15:04.374 }, 00:15:04.374 { 00:15:04.374 "name": "BaseBdev2", 00:15:04.374 "uuid": "0f685c07-31b4-41ff-a0e7-2ba406f48196", 00:15:04.374 "is_configured": true, 00:15:04.374 "data_offset": 2048, 00:15:04.374 "data_size": 63488 00:15:04.374 }, 00:15:04.374 { 00:15:04.374 "name": "BaseBdev3", 00:15:04.374 "uuid": "55ba5c3f-d7e1-468c-bd0f-20fa1d74966f", 00:15:04.374 "is_configured": true, 00:15:04.374 "data_offset": 2048, 00:15:04.374 "data_size": 63488 00:15:04.374 }, 00:15:04.374 { 00:15:04.374 "name": "BaseBdev4", 00:15:04.374 "uuid": "63fe2354-6d29-4493-aecc-901bfa71b77e", 00:15:04.374 "is_configured": true, 00:15:04.374 "data_offset": 2048, 00:15:04.374 "data_size": 63488 00:15:04.374 } 00:15:04.374 ] 00:15:04.374 }' 00:15:04.374 06:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:04.374 06:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.634 06:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.634 06:24:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:04.634 06:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.634 06:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.634 06:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.894 06:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:04.894 06:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:04.894 06:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.894 06:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.894 06:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.894 06:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.894 06:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 1ddcb428-7ebc-4bee-a878-73c1f43f9b83 00:15:04.894 06:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.894 06:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.894 [2024-11-26 06:24:48.866749] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:04.894 [2024-11-26 06:24:48.867047] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:04.894 [2024-11-26 06:24:48.867094] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:04.894 [2024-11-26 06:24:48.867435] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:15:04.894 NewBaseBdev 00:15:04.894 [2024-11-26 06:24:48.867636] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:04.894 [2024-11-26 06:24:48.867661] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:04.894 [2024-11-26 06:24:48.867894] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:04.894 06:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.894 06:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:04.894 06:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:04.894 06:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:04.894 06:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:04.894 06:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:04.894 06:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:04.894 06:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:04.894 06:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.894 06:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.894 06:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.894 06:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:04.894 06:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.894 06:24:48 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.894 [ 00:15:04.894 { 00:15:04.894 "name": "NewBaseBdev", 00:15:04.894 "aliases": [ 00:15:04.894 "1ddcb428-7ebc-4bee-a878-73c1f43f9b83" 00:15:04.894 ], 00:15:04.894 "product_name": "Malloc disk", 00:15:04.894 "block_size": 512, 00:15:04.894 "num_blocks": 65536, 00:15:04.894 "uuid": "1ddcb428-7ebc-4bee-a878-73c1f43f9b83", 00:15:04.894 "assigned_rate_limits": { 00:15:04.894 "rw_ios_per_sec": 0, 00:15:04.894 "rw_mbytes_per_sec": 0, 00:15:04.894 "r_mbytes_per_sec": 0, 00:15:04.894 "w_mbytes_per_sec": 0 00:15:04.894 }, 00:15:04.894 "claimed": true, 00:15:04.894 "claim_type": "exclusive_write", 00:15:04.894 "zoned": false, 00:15:04.894 "supported_io_types": { 00:15:04.894 "read": true, 00:15:04.894 "write": true, 00:15:04.894 "unmap": true, 00:15:04.894 "flush": true, 00:15:04.894 "reset": true, 00:15:04.894 "nvme_admin": false, 00:15:04.894 "nvme_io": false, 00:15:04.894 "nvme_io_md": false, 00:15:04.894 "write_zeroes": true, 00:15:04.894 "zcopy": true, 00:15:04.894 "get_zone_info": false, 00:15:04.894 "zone_management": false, 00:15:04.894 "zone_append": false, 00:15:04.894 "compare": false, 00:15:04.894 "compare_and_write": false, 00:15:04.894 "abort": true, 00:15:04.894 "seek_hole": false, 00:15:04.894 "seek_data": false, 00:15:04.894 "copy": true, 00:15:04.894 "nvme_iov_md": false 00:15:04.894 }, 00:15:04.894 "memory_domains": [ 00:15:04.894 { 00:15:04.894 "dma_device_id": "system", 00:15:04.894 "dma_device_type": 1 00:15:04.894 }, 00:15:04.895 { 00:15:04.895 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.895 "dma_device_type": 2 00:15:04.895 } 00:15:04.895 ], 00:15:04.895 "driver_specific": {} 00:15:04.895 } 00:15:04.895 ] 00:15:04.895 06:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.895 06:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:04.895 06:24:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:15:04.895 06:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:04.895 06:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:04.895 06:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:04.895 06:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:04.895 06:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:04.895 06:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:04.895 06:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:04.895 06:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:04.895 06:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:04.895 06:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.895 06:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:04.895 06:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.895 06:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.895 06:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.895 06:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:04.895 "name": "Existed_Raid", 00:15:04.895 "uuid": "1f6cd1ab-6b03-4507-9e16-fa4f3378bca8", 00:15:04.895 "strip_size_kb": 0, 00:15:04.895 
"state": "online", 00:15:04.895 "raid_level": "raid1", 00:15:04.895 "superblock": true, 00:15:04.895 "num_base_bdevs": 4, 00:15:04.895 "num_base_bdevs_discovered": 4, 00:15:04.895 "num_base_bdevs_operational": 4, 00:15:04.895 "base_bdevs_list": [ 00:15:04.895 { 00:15:04.895 "name": "NewBaseBdev", 00:15:04.895 "uuid": "1ddcb428-7ebc-4bee-a878-73c1f43f9b83", 00:15:04.895 "is_configured": true, 00:15:04.895 "data_offset": 2048, 00:15:04.895 "data_size": 63488 00:15:04.895 }, 00:15:04.895 { 00:15:04.895 "name": "BaseBdev2", 00:15:04.895 "uuid": "0f685c07-31b4-41ff-a0e7-2ba406f48196", 00:15:04.895 "is_configured": true, 00:15:04.895 "data_offset": 2048, 00:15:04.895 "data_size": 63488 00:15:04.895 }, 00:15:04.895 { 00:15:04.895 "name": "BaseBdev3", 00:15:04.895 "uuid": "55ba5c3f-d7e1-468c-bd0f-20fa1d74966f", 00:15:04.895 "is_configured": true, 00:15:04.895 "data_offset": 2048, 00:15:04.895 "data_size": 63488 00:15:04.895 }, 00:15:04.895 { 00:15:04.895 "name": "BaseBdev4", 00:15:04.895 "uuid": "63fe2354-6d29-4493-aecc-901bfa71b77e", 00:15:04.895 "is_configured": true, 00:15:04.895 "data_offset": 2048, 00:15:04.895 "data_size": 63488 00:15:04.895 } 00:15:04.895 ] 00:15:04.895 }' 00:15:04.895 06:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:04.895 06:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.463 06:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:05.463 06:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:05.463 06:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:05.463 06:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:05.463 06:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:05.463 
06:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:05.463 06:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:05.463 06:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:05.463 06:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.463 06:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.463 [2024-11-26 06:24:49.386453] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:05.463 06:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.463 06:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:05.463 "name": "Existed_Raid", 00:15:05.463 "aliases": [ 00:15:05.463 "1f6cd1ab-6b03-4507-9e16-fa4f3378bca8" 00:15:05.463 ], 00:15:05.463 "product_name": "Raid Volume", 00:15:05.463 "block_size": 512, 00:15:05.463 "num_blocks": 63488, 00:15:05.463 "uuid": "1f6cd1ab-6b03-4507-9e16-fa4f3378bca8", 00:15:05.463 "assigned_rate_limits": { 00:15:05.463 "rw_ios_per_sec": 0, 00:15:05.463 "rw_mbytes_per_sec": 0, 00:15:05.463 "r_mbytes_per_sec": 0, 00:15:05.463 "w_mbytes_per_sec": 0 00:15:05.463 }, 00:15:05.463 "claimed": false, 00:15:05.463 "zoned": false, 00:15:05.463 "supported_io_types": { 00:15:05.463 "read": true, 00:15:05.463 "write": true, 00:15:05.463 "unmap": false, 00:15:05.463 "flush": false, 00:15:05.463 "reset": true, 00:15:05.463 "nvme_admin": false, 00:15:05.463 "nvme_io": false, 00:15:05.463 "nvme_io_md": false, 00:15:05.463 "write_zeroes": true, 00:15:05.463 "zcopy": false, 00:15:05.463 "get_zone_info": false, 00:15:05.463 "zone_management": false, 00:15:05.463 "zone_append": false, 00:15:05.463 "compare": false, 00:15:05.463 "compare_and_write": false, 00:15:05.463 
"abort": false, 00:15:05.463 "seek_hole": false, 00:15:05.463 "seek_data": false, 00:15:05.463 "copy": false, 00:15:05.463 "nvme_iov_md": false 00:15:05.463 }, 00:15:05.463 "memory_domains": [ 00:15:05.463 { 00:15:05.463 "dma_device_id": "system", 00:15:05.463 "dma_device_type": 1 00:15:05.463 }, 00:15:05.463 { 00:15:05.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:05.463 "dma_device_type": 2 00:15:05.463 }, 00:15:05.463 { 00:15:05.463 "dma_device_id": "system", 00:15:05.463 "dma_device_type": 1 00:15:05.463 }, 00:15:05.463 { 00:15:05.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:05.463 "dma_device_type": 2 00:15:05.463 }, 00:15:05.463 { 00:15:05.463 "dma_device_id": "system", 00:15:05.463 "dma_device_type": 1 00:15:05.463 }, 00:15:05.463 { 00:15:05.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:05.463 "dma_device_type": 2 00:15:05.463 }, 00:15:05.463 { 00:15:05.463 "dma_device_id": "system", 00:15:05.463 "dma_device_type": 1 00:15:05.463 }, 00:15:05.464 { 00:15:05.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:05.464 "dma_device_type": 2 00:15:05.464 } 00:15:05.464 ], 00:15:05.464 "driver_specific": { 00:15:05.464 "raid": { 00:15:05.464 "uuid": "1f6cd1ab-6b03-4507-9e16-fa4f3378bca8", 00:15:05.464 "strip_size_kb": 0, 00:15:05.464 "state": "online", 00:15:05.464 "raid_level": "raid1", 00:15:05.464 "superblock": true, 00:15:05.464 "num_base_bdevs": 4, 00:15:05.464 "num_base_bdevs_discovered": 4, 00:15:05.464 "num_base_bdevs_operational": 4, 00:15:05.464 "base_bdevs_list": [ 00:15:05.464 { 00:15:05.464 "name": "NewBaseBdev", 00:15:05.464 "uuid": "1ddcb428-7ebc-4bee-a878-73c1f43f9b83", 00:15:05.464 "is_configured": true, 00:15:05.464 "data_offset": 2048, 00:15:05.464 "data_size": 63488 00:15:05.464 }, 00:15:05.464 { 00:15:05.464 "name": "BaseBdev2", 00:15:05.464 "uuid": "0f685c07-31b4-41ff-a0e7-2ba406f48196", 00:15:05.464 "is_configured": true, 00:15:05.464 "data_offset": 2048, 00:15:05.464 "data_size": 63488 00:15:05.464 }, 00:15:05.464 { 
00:15:05.464 "name": "BaseBdev3", 00:15:05.464 "uuid": "55ba5c3f-d7e1-468c-bd0f-20fa1d74966f", 00:15:05.464 "is_configured": true, 00:15:05.464 "data_offset": 2048, 00:15:05.464 "data_size": 63488 00:15:05.464 }, 00:15:05.464 { 00:15:05.464 "name": "BaseBdev4", 00:15:05.464 "uuid": "63fe2354-6d29-4493-aecc-901bfa71b77e", 00:15:05.464 "is_configured": true, 00:15:05.464 "data_offset": 2048, 00:15:05.464 "data_size": 63488 00:15:05.464 } 00:15:05.464 ] 00:15:05.464 } 00:15:05.464 } 00:15:05.464 }' 00:15:05.464 06:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:05.464 06:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:05.464 BaseBdev2 00:15:05.464 BaseBdev3 00:15:05.464 BaseBdev4' 00:15:05.464 06:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:05.464 06:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:05.464 06:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:05.464 06:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:05.464 06:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:05.464 06:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.464 06:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.464 06:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.464 06:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:15:05.464 06:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:05.464 06:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:05.464 06:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:05.464 06:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:05.464 06:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.464 06:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.464 06:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.722 06:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:05.722 06:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:05.722 06:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:05.722 06:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:05.722 06:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.722 06:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.722 06:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:05.722 06:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.722 06:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:05.722 06:24:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:05.722 06:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:05.722 06:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:05.722 06:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.722 06:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.722 06:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:05.722 06:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.722 06:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:05.722 06:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:05.723 06:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:05.723 06:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.723 06:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.723 [2024-11-26 06:24:49.717452] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:05.723 [2024-11-26 06:24:49.717502] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:05.723 [2024-11-26 06:24:49.717609] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:05.723 [2024-11-26 06:24:49.717936] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:05.723 [2024-11-26 06:24:49.717952] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:15:05.723 06:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.723 06:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 74368 00:15:05.723 06:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 74368 ']' 00:15:05.723 06:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 74368 00:15:05.723 06:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:05.723 06:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:05.723 06:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74368 00:15:05.723 06:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:05.723 06:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:05.723 06:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74368' 00:15:05.723 killing process with pid 74368 00:15:05.723 06:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 74368 00:15:05.723 [2024-11-26 06:24:49.767476] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:05.723 06:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 74368 00:15:06.289 [2024-11-26 06:24:50.253513] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:07.667 06:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:07.667 00:15:07.667 real 0m12.025s 00:15:07.667 user 0m18.938s 00:15:07.667 sys 0m2.142s 00:15:07.667 ************************************ 00:15:07.667 END TEST raid_state_function_test_sb 
00:15:07.667 ************************************ 00:15:07.667 06:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:07.667 06:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.667 06:24:51 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:15:07.667 06:24:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:07.667 06:24:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:07.667 06:24:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:07.667 ************************************ 00:15:07.667 START TEST raid_superblock_test 00:15:07.667 ************************************ 00:15:07.667 06:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:15:07.667 06:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:15:07.667 06:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:15:07.667 06:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:07.667 06:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:07.667 06:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:07.667 06:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:07.667 06:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:07.667 06:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:07.667 06:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:07.667 06:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:07.667 06:24:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:07.667 06:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:07.667 06:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:07.667 06:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:15:07.667 06:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:15:07.667 06:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=75037 00:15:07.667 06:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:07.667 06:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 75037 00:15:07.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:07.667 06:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 75037 ']' 00:15:07.667 06:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:07.667 06:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:07.667 06:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:07.667 06:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:07.667 06:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.667 [2024-11-26 06:24:51.695937] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:15:07.667 [2024-11-26 06:24:51.696816] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75037 ] 00:15:07.927 [2024-11-26 06:24:51.863460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:07.927 [2024-11-26 06:24:51.983023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:08.187 [2024-11-26 06:24:52.186964] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:08.187 [2024-11-26 06:24:52.187112] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:15:08.756 
06:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.756 malloc1 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.756 [2024-11-26 06:24:52.639629] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:08.756 [2024-11-26 06:24:52.639826] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:08.756 [2024-11-26 06:24:52.639929] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:08.756 [2024-11-26 06:24:52.639976] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:08.756 [2024-11-26 06:24:52.642621] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:08.756 [2024-11-26 06:24:52.642722] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:08.756 pt1 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.756 malloc2 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.756 [2024-11-26 06:24:52.701802] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:08.756 [2024-11-26 06:24:52.701981] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:08.756 [2024-11-26 06:24:52.702014] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:08.756 [2024-11-26 06:24:52.702026] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:08.756 [2024-11-26 06:24:52.704682] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:08.756 [2024-11-26 06:24:52.704733] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:08.756 
pt2 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.756 malloc3 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.756 [2024-11-26 06:24:52.780892] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:08.756 [2024-11-26 06:24:52.781040] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:08.756 [2024-11-26 06:24:52.781112] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:08.756 [2024-11-26 06:24:52.781167] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:08.756 [2024-11-26 06:24:52.783709] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:08.756 [2024-11-26 06:24:52.783793] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:08.756 pt3 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.756 malloc4 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.756 [2024-11-26 06:24:52.843620] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:08.756 [2024-11-26 06:24:52.843781] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:08.756 [2024-11-26 06:24:52.843825] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:08.756 [2024-11-26 06:24:52.843861] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:08.756 [2024-11-26 06:24:52.846501] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:08.756 [2024-11-26 06:24:52.846599] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:08.756 pt4 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.756 [2024-11-26 06:24:52.855673] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:08.756 [2024-11-26 06:24:52.857998] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:08.756 [2024-11-26 06:24:52.858144] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:08.756 [2024-11-26 06:24:52.858248] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:08.756 [2024-11-26 06:24:52.858543] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:08.756 [2024-11-26 06:24:52.858604] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:08.756 [2024-11-26 06:24:52.859011] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:08.756 [2024-11-26 06:24:52.859306] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:08.756 [2024-11-26 06:24:52.859370] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:08.756 [2024-11-26 06:24:52.859760] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:08.756 06:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:08.757 06:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:08.757 
06:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:08.757 06:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:08.757 06:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:08.757 06:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.757 06:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.757 06:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.757 06:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.757 06:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.015 06:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:09.015 "name": "raid_bdev1", 00:15:09.015 "uuid": "8d5a9f2b-0e8d-4fc6-9d64-3a39b7b94587", 00:15:09.015 "strip_size_kb": 0, 00:15:09.015 "state": "online", 00:15:09.015 "raid_level": "raid1", 00:15:09.015 "superblock": true, 00:15:09.015 "num_base_bdevs": 4, 00:15:09.015 "num_base_bdevs_discovered": 4, 00:15:09.015 "num_base_bdevs_operational": 4, 00:15:09.015 "base_bdevs_list": [ 00:15:09.015 { 00:15:09.015 "name": "pt1", 00:15:09.015 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:09.015 "is_configured": true, 00:15:09.015 "data_offset": 2048, 00:15:09.015 "data_size": 63488 00:15:09.015 }, 00:15:09.015 { 00:15:09.015 "name": "pt2", 00:15:09.015 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:09.015 "is_configured": true, 00:15:09.015 "data_offset": 2048, 00:15:09.015 "data_size": 63488 00:15:09.015 }, 00:15:09.015 { 00:15:09.015 "name": "pt3", 00:15:09.015 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:09.015 "is_configured": true, 00:15:09.015 "data_offset": 2048, 00:15:09.015 "data_size": 63488 
00:15:09.015 }, 00:15:09.015 { 00:15:09.015 "name": "pt4", 00:15:09.015 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:09.015 "is_configured": true, 00:15:09.015 "data_offset": 2048, 00:15:09.015 "data_size": 63488 00:15:09.015 } 00:15:09.015 ] 00:15:09.015 }' 00:15:09.015 06:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:09.015 06:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.274 06:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:09.274 06:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:09.274 06:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:09.274 06:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:09.274 06:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:09.274 06:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:09.274 06:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:09.274 06:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.274 06:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.274 06:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:09.274 [2024-11-26 06:24:53.299417] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:09.274 06:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.274 06:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:09.274 "name": "raid_bdev1", 00:15:09.274 "aliases": [ 00:15:09.274 "8d5a9f2b-0e8d-4fc6-9d64-3a39b7b94587" 00:15:09.274 ], 
00:15:09.274 "product_name": "Raid Volume", 00:15:09.274 "block_size": 512, 00:15:09.274 "num_blocks": 63488, 00:15:09.274 "uuid": "8d5a9f2b-0e8d-4fc6-9d64-3a39b7b94587", 00:15:09.274 "assigned_rate_limits": { 00:15:09.274 "rw_ios_per_sec": 0, 00:15:09.274 "rw_mbytes_per_sec": 0, 00:15:09.274 "r_mbytes_per_sec": 0, 00:15:09.274 "w_mbytes_per_sec": 0 00:15:09.274 }, 00:15:09.274 "claimed": false, 00:15:09.274 "zoned": false, 00:15:09.274 "supported_io_types": { 00:15:09.274 "read": true, 00:15:09.274 "write": true, 00:15:09.274 "unmap": false, 00:15:09.274 "flush": false, 00:15:09.274 "reset": true, 00:15:09.274 "nvme_admin": false, 00:15:09.274 "nvme_io": false, 00:15:09.274 "nvme_io_md": false, 00:15:09.274 "write_zeroes": true, 00:15:09.274 "zcopy": false, 00:15:09.274 "get_zone_info": false, 00:15:09.274 "zone_management": false, 00:15:09.274 "zone_append": false, 00:15:09.274 "compare": false, 00:15:09.274 "compare_and_write": false, 00:15:09.274 "abort": false, 00:15:09.274 "seek_hole": false, 00:15:09.274 "seek_data": false, 00:15:09.274 "copy": false, 00:15:09.274 "nvme_iov_md": false 00:15:09.274 }, 00:15:09.274 "memory_domains": [ 00:15:09.274 { 00:15:09.274 "dma_device_id": "system", 00:15:09.274 "dma_device_type": 1 00:15:09.274 }, 00:15:09.274 { 00:15:09.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:09.274 "dma_device_type": 2 00:15:09.274 }, 00:15:09.274 { 00:15:09.274 "dma_device_id": "system", 00:15:09.274 "dma_device_type": 1 00:15:09.274 }, 00:15:09.274 { 00:15:09.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:09.274 "dma_device_type": 2 00:15:09.274 }, 00:15:09.274 { 00:15:09.274 "dma_device_id": "system", 00:15:09.274 "dma_device_type": 1 00:15:09.274 }, 00:15:09.274 { 00:15:09.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:09.274 "dma_device_type": 2 00:15:09.274 }, 00:15:09.274 { 00:15:09.274 "dma_device_id": "system", 00:15:09.274 "dma_device_type": 1 00:15:09.274 }, 00:15:09.274 { 00:15:09.274 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:15:09.274 "dma_device_type": 2 00:15:09.274 } 00:15:09.274 ], 00:15:09.274 "driver_specific": { 00:15:09.274 "raid": { 00:15:09.274 "uuid": "8d5a9f2b-0e8d-4fc6-9d64-3a39b7b94587", 00:15:09.274 "strip_size_kb": 0, 00:15:09.274 "state": "online", 00:15:09.274 "raid_level": "raid1", 00:15:09.274 "superblock": true, 00:15:09.274 "num_base_bdevs": 4, 00:15:09.274 "num_base_bdevs_discovered": 4, 00:15:09.274 "num_base_bdevs_operational": 4, 00:15:09.274 "base_bdevs_list": [ 00:15:09.274 { 00:15:09.274 "name": "pt1", 00:15:09.274 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:09.274 "is_configured": true, 00:15:09.274 "data_offset": 2048, 00:15:09.274 "data_size": 63488 00:15:09.274 }, 00:15:09.274 { 00:15:09.274 "name": "pt2", 00:15:09.274 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:09.274 "is_configured": true, 00:15:09.274 "data_offset": 2048, 00:15:09.274 "data_size": 63488 00:15:09.274 }, 00:15:09.274 { 00:15:09.274 "name": "pt3", 00:15:09.274 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:09.274 "is_configured": true, 00:15:09.274 "data_offset": 2048, 00:15:09.274 "data_size": 63488 00:15:09.274 }, 00:15:09.274 { 00:15:09.274 "name": "pt4", 00:15:09.274 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:09.274 "is_configured": true, 00:15:09.274 "data_offset": 2048, 00:15:09.274 "data_size": 63488 00:15:09.274 } 00:15:09.274 ] 00:15:09.274 } 00:15:09.274 } 00:15:09.274 }' 00:15:09.275 06:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:09.275 06:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:09.275 pt2 00:15:09.275 pt3 00:15:09.275 pt4' 00:15:09.275 06:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:09.533 06:24:53 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:09.533 06:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:09.533 06:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:09.533 06:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:09.533 06:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.533 06:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.533 06:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.533 06:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:09.533 06:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:09.533 06:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:09.533 06:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:09.533 06:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.533 06:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:09.533 06:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.533 06:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.533 06:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:09.533 06:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:09.533 06:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:09.533 06:24:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:09.533 06:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.533 06:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.533 06:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:09.533 06:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.533 06:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:09.533 06:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:09.533 06:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:09.533 06:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:15:09.533 06:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.533 06:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.534 06:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:09.534 06:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.534 06:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:09.534 06:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:09.534 06:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:09.534 06:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:09.534 06:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:09.534 06:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.534 [2024-11-26 06:24:53.618822] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:09.534 06:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.534 06:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=8d5a9f2b-0e8d-4fc6-9d64-3a39b7b94587 00:15:09.534 06:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 8d5a9f2b-0e8d-4fc6-9d64-3a39b7b94587 ']' 00:15:09.534 06:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:09.534 06:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.534 06:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.793 [2024-11-26 06:24:53.666368] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:09.793 [2024-11-26 06:24:53.666420] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:09.793 [2024-11-26 06:24:53.666532] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:09.793 [2024-11-26 06:24:53.666644] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:09.793 [2024-11-26 06:24:53.666663] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:09.794 06:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.794 06:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.794 06:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.794 06:24:53 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:09.794 06:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:09.794 06:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.794 06:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:09.794 06:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:09.794 06:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:09.794 06:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:09.794 06:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.794 06:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.794 06:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.794 06:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:09.794 06:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:09.794 06:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.794 06:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.794 06:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.794 06:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:09.794 06:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:09.794 06:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.794 06:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.794 06:24:53 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.794 06:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:09.794 06:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:15:09.794 06:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.794 06:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.794 06:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.794 06:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:09.794 06:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.794 06:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:09.794 06:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.794 06:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.794 06:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:09.794 06:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:09.794 06:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:15:09.794 06:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:09.794 06:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:09.794 06:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:09.794 06:24:53 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:09.794 06:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:09.794 06:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:09.794 06:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.794 06:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.794 [2024-11-26 06:24:53.838163] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:09.794 [2024-11-26 06:24:53.840568] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:09.794 [2024-11-26 06:24:53.840640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:09.794 [2024-11-26 06:24:53.840680] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:15:09.794 [2024-11-26 06:24:53.840739] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:09.794 [2024-11-26 06:24:53.840804] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:09.794 [2024-11-26 06:24:53.840827] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:09.794 [2024-11-26 06:24:53.840849] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:15:09.794 [2024-11-26 06:24:53.840864] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:09.794 [2024-11-26 06:24:53.840878] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name 
raid_bdev1, state configuring 00:15:09.794 request: 00:15:09.794 { 00:15:09.794 "name": "raid_bdev1", 00:15:09.794 "raid_level": "raid1", 00:15:09.794 "base_bdevs": [ 00:15:09.794 "malloc1", 00:15:09.794 "malloc2", 00:15:09.794 "malloc3", 00:15:09.794 "malloc4" 00:15:09.794 ], 00:15:09.794 "superblock": false, 00:15:09.794 "method": "bdev_raid_create", 00:15:09.794 "req_id": 1 00:15:09.794 } 00:15:09.794 Got JSON-RPC error response 00:15:09.794 response: 00:15:09.794 { 00:15:09.794 "code": -17, 00:15:09.794 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:09.794 } 00:15:09.794 06:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:09.794 06:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:15:09.794 06:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:09.794 06:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:09.794 06:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:09.794 06:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.794 06:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.794 06:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.794 06:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:09.794 06:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.794 06:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:09.794 06:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:09.794 06:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:09.794 
06:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.794 06:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.794 [2024-11-26 06:24:53.906013] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:09.794 [2024-11-26 06:24:53.906195] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:09.794 [2024-11-26 06:24:53.906261] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:09.794 [2024-11-26 06:24:53.906311] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:09.794 [2024-11-26 06:24:53.908980] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:09.794 [2024-11-26 06:24:53.909115] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:09.794 [2024-11-26 06:24:53.909281] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:09.794 [2024-11-26 06:24:53.909410] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:09.794 pt1 00:15:09.794 06:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.794 06:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:15:09.794 06:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:09.794 06:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:09.794 06:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:09.794 06:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:09.794 06:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:09.794 06:24:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:09.794 06:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:09.794 06:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:09.794 06:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:09.794 06:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.794 06:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.794 06:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.794 06:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.054 06:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.054 06:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.054 "name": "raid_bdev1", 00:15:10.054 "uuid": "8d5a9f2b-0e8d-4fc6-9d64-3a39b7b94587", 00:15:10.054 "strip_size_kb": 0, 00:15:10.054 "state": "configuring", 00:15:10.054 "raid_level": "raid1", 00:15:10.054 "superblock": true, 00:15:10.054 "num_base_bdevs": 4, 00:15:10.054 "num_base_bdevs_discovered": 1, 00:15:10.054 "num_base_bdevs_operational": 4, 00:15:10.054 "base_bdevs_list": [ 00:15:10.054 { 00:15:10.054 "name": "pt1", 00:15:10.054 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:10.054 "is_configured": true, 00:15:10.054 "data_offset": 2048, 00:15:10.054 "data_size": 63488 00:15:10.054 }, 00:15:10.054 { 00:15:10.054 "name": null, 00:15:10.054 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:10.054 "is_configured": false, 00:15:10.054 "data_offset": 2048, 00:15:10.054 "data_size": 63488 00:15:10.054 }, 00:15:10.054 { 00:15:10.054 "name": null, 00:15:10.054 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:10.054 
"is_configured": false, 00:15:10.054 "data_offset": 2048, 00:15:10.054 "data_size": 63488 00:15:10.054 }, 00:15:10.054 { 00:15:10.054 "name": null, 00:15:10.054 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:10.054 "is_configured": false, 00:15:10.054 "data_offset": 2048, 00:15:10.054 "data_size": 63488 00:15:10.054 } 00:15:10.054 ] 00:15:10.054 }' 00:15:10.054 06:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.054 06:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.314 06:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:15:10.314 06:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:10.314 06:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.314 06:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.314 [2024-11-26 06:24:54.413272] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:10.314 [2024-11-26 06:24:54.413378] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.314 [2024-11-26 06:24:54.413401] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:10.314 [2024-11-26 06:24:54.413414] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.314 [2024-11-26 06:24:54.413935] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.314 [2024-11-26 06:24:54.413961] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:10.314 [2024-11-26 06:24:54.414078] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:10.314 [2024-11-26 06:24:54.414118] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:15:10.314 pt2 00:15:10.314 06:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.314 06:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:10.314 06:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.314 06:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.314 [2024-11-26 06:24:54.425291] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:10.314 06:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.314 06:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:15:10.314 06:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:10.314 06:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:10.314 06:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:10.314 06:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:10.314 06:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:10.314 06:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.314 06:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.314 06:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.314 06:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.314 06:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.314 06:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.314 06:24:54 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.314 06:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.611 06:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.611 06:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.611 "name": "raid_bdev1", 00:15:10.611 "uuid": "8d5a9f2b-0e8d-4fc6-9d64-3a39b7b94587", 00:15:10.611 "strip_size_kb": 0, 00:15:10.611 "state": "configuring", 00:15:10.611 "raid_level": "raid1", 00:15:10.611 "superblock": true, 00:15:10.611 "num_base_bdevs": 4, 00:15:10.611 "num_base_bdevs_discovered": 1, 00:15:10.611 "num_base_bdevs_operational": 4, 00:15:10.611 "base_bdevs_list": [ 00:15:10.611 { 00:15:10.611 "name": "pt1", 00:15:10.611 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:10.611 "is_configured": true, 00:15:10.611 "data_offset": 2048, 00:15:10.611 "data_size": 63488 00:15:10.611 }, 00:15:10.611 { 00:15:10.612 "name": null, 00:15:10.612 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:10.612 "is_configured": false, 00:15:10.612 "data_offset": 0, 00:15:10.612 "data_size": 63488 00:15:10.612 }, 00:15:10.612 { 00:15:10.612 "name": null, 00:15:10.612 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:10.612 "is_configured": false, 00:15:10.612 "data_offset": 2048, 00:15:10.612 "data_size": 63488 00:15:10.612 }, 00:15:10.612 { 00:15:10.612 "name": null, 00:15:10.612 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:10.612 "is_configured": false, 00:15:10.612 "data_offset": 2048, 00:15:10.612 "data_size": 63488 00:15:10.612 } 00:15:10.612 ] 00:15:10.612 }' 00:15:10.612 06:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.612 06:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.872 06:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:15:10.872 06:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:10.872 06:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:10.872 06:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.872 06:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.872 [2024-11-26 06:24:54.880700] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:10.872 [2024-11-26 06:24:54.880884] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.872 [2024-11-26 06:24:54.880960] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:10.872 [2024-11-26 06:24:54.881066] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.872 [2024-11-26 06:24:54.881646] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.872 [2024-11-26 06:24:54.881720] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:10.872 [2024-11-26 06:24:54.881871] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:10.872 [2024-11-26 06:24:54.881937] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:10.872 pt2 00:15:10.872 06:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.872 06:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:10.872 06:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:10.872 06:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:10.872 06:24:54 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.872 06:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.872 [2024-11-26 06:24:54.892621] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:10.872 [2024-11-26 06:24:54.892735] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.872 [2024-11-26 06:24:54.892776] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:10.872 [2024-11-26 06:24:54.892824] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.872 [2024-11-26 06:24:54.893390] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.872 [2024-11-26 06:24:54.893462] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:10.872 [2024-11-26 06:24:54.893604] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:10.872 [2024-11-26 06:24:54.893662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:10.872 pt3 00:15:10.872 06:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.872 06:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:10.872 06:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:10.872 06:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:10.872 06:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.872 06:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.872 [2024-11-26 06:24:54.904585] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:10.872 [2024-11-26 
06:24:54.904646] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.872 [2024-11-26 06:24:54.904669] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:15:10.872 [2024-11-26 06:24:54.904679] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.872 [2024-11-26 06:24:54.905181] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.872 [2024-11-26 06:24:54.905216] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:10.872 [2024-11-26 06:24:54.905344] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:10.872 [2024-11-26 06:24:54.905380] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:10.872 [2024-11-26 06:24:54.905587] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:10.872 [2024-11-26 06:24:54.905607] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:10.872 [2024-11-26 06:24:54.905904] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:10.872 [2024-11-26 06:24:54.906105] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:10.872 [2024-11-26 06:24:54.906123] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:10.872 [2024-11-26 06:24:54.906307] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:10.872 pt4 00:15:10.872 06:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.872 06:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:10.872 06:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:10.872 06:24:54 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:10.872 06:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:10.872 06:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:10.873 06:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:10.873 06:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:10.873 06:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:10.873 06:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.873 06:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.873 06:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.873 06:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.873 06:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.873 06:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.873 06:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.873 06:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.873 06:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.873 06:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.873 "name": "raid_bdev1", 00:15:10.873 "uuid": "8d5a9f2b-0e8d-4fc6-9d64-3a39b7b94587", 00:15:10.873 "strip_size_kb": 0, 00:15:10.873 "state": "online", 00:15:10.873 "raid_level": "raid1", 00:15:10.873 "superblock": true, 00:15:10.873 "num_base_bdevs": 4, 00:15:10.873 
"num_base_bdevs_discovered": 4, 00:15:10.873 "num_base_bdevs_operational": 4, 00:15:10.873 "base_bdevs_list": [ 00:15:10.873 { 00:15:10.873 "name": "pt1", 00:15:10.873 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:10.873 "is_configured": true, 00:15:10.873 "data_offset": 2048, 00:15:10.873 "data_size": 63488 00:15:10.873 }, 00:15:10.873 { 00:15:10.873 "name": "pt2", 00:15:10.873 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:10.873 "is_configured": true, 00:15:10.873 "data_offset": 2048, 00:15:10.873 "data_size": 63488 00:15:10.873 }, 00:15:10.873 { 00:15:10.873 "name": "pt3", 00:15:10.873 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:10.873 "is_configured": true, 00:15:10.873 "data_offset": 2048, 00:15:10.873 "data_size": 63488 00:15:10.873 }, 00:15:10.873 { 00:15:10.873 "name": "pt4", 00:15:10.873 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:10.873 "is_configured": true, 00:15:10.873 "data_offset": 2048, 00:15:10.873 "data_size": 63488 00:15:10.873 } 00:15:10.873 ] 00:15:10.873 }' 00:15:10.873 06:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.873 06:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.440 06:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:11.440 06:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:11.440 06:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:11.440 06:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:11.440 06:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:11.440 06:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:11.440 06:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:15:11.440 06:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:11.440 06:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.440 06:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.440 [2024-11-26 06:24:55.416472] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:11.440 06:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.440 06:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:11.440 "name": "raid_bdev1", 00:15:11.440 "aliases": [ 00:15:11.440 "8d5a9f2b-0e8d-4fc6-9d64-3a39b7b94587" 00:15:11.440 ], 00:15:11.440 "product_name": "Raid Volume", 00:15:11.440 "block_size": 512, 00:15:11.440 "num_blocks": 63488, 00:15:11.440 "uuid": "8d5a9f2b-0e8d-4fc6-9d64-3a39b7b94587", 00:15:11.440 "assigned_rate_limits": { 00:15:11.440 "rw_ios_per_sec": 0, 00:15:11.440 "rw_mbytes_per_sec": 0, 00:15:11.440 "r_mbytes_per_sec": 0, 00:15:11.440 "w_mbytes_per_sec": 0 00:15:11.440 }, 00:15:11.440 "claimed": false, 00:15:11.440 "zoned": false, 00:15:11.440 "supported_io_types": { 00:15:11.440 "read": true, 00:15:11.440 "write": true, 00:15:11.440 "unmap": false, 00:15:11.440 "flush": false, 00:15:11.440 "reset": true, 00:15:11.440 "nvme_admin": false, 00:15:11.440 "nvme_io": false, 00:15:11.440 "nvme_io_md": false, 00:15:11.440 "write_zeroes": true, 00:15:11.440 "zcopy": false, 00:15:11.440 "get_zone_info": false, 00:15:11.440 "zone_management": false, 00:15:11.440 "zone_append": false, 00:15:11.440 "compare": false, 00:15:11.440 "compare_and_write": false, 00:15:11.440 "abort": false, 00:15:11.440 "seek_hole": false, 00:15:11.440 "seek_data": false, 00:15:11.440 "copy": false, 00:15:11.440 "nvme_iov_md": false 00:15:11.440 }, 00:15:11.440 "memory_domains": [ 00:15:11.440 { 00:15:11.440 "dma_device_id": "system", 00:15:11.440 
"dma_device_type": 1 00:15:11.440 }, 00:15:11.440 { 00:15:11.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:11.440 "dma_device_type": 2 00:15:11.440 }, 00:15:11.440 { 00:15:11.440 "dma_device_id": "system", 00:15:11.440 "dma_device_type": 1 00:15:11.440 }, 00:15:11.440 { 00:15:11.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:11.440 "dma_device_type": 2 00:15:11.440 }, 00:15:11.440 { 00:15:11.440 "dma_device_id": "system", 00:15:11.440 "dma_device_type": 1 00:15:11.440 }, 00:15:11.440 { 00:15:11.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:11.440 "dma_device_type": 2 00:15:11.440 }, 00:15:11.440 { 00:15:11.440 "dma_device_id": "system", 00:15:11.440 "dma_device_type": 1 00:15:11.440 }, 00:15:11.440 { 00:15:11.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:11.440 "dma_device_type": 2 00:15:11.440 } 00:15:11.440 ], 00:15:11.440 "driver_specific": { 00:15:11.440 "raid": { 00:15:11.440 "uuid": "8d5a9f2b-0e8d-4fc6-9d64-3a39b7b94587", 00:15:11.440 "strip_size_kb": 0, 00:15:11.440 "state": "online", 00:15:11.440 "raid_level": "raid1", 00:15:11.440 "superblock": true, 00:15:11.440 "num_base_bdevs": 4, 00:15:11.440 "num_base_bdevs_discovered": 4, 00:15:11.440 "num_base_bdevs_operational": 4, 00:15:11.440 "base_bdevs_list": [ 00:15:11.440 { 00:15:11.440 "name": "pt1", 00:15:11.440 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:11.440 "is_configured": true, 00:15:11.440 "data_offset": 2048, 00:15:11.440 "data_size": 63488 00:15:11.440 }, 00:15:11.440 { 00:15:11.440 "name": "pt2", 00:15:11.440 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:11.440 "is_configured": true, 00:15:11.440 "data_offset": 2048, 00:15:11.440 "data_size": 63488 00:15:11.440 }, 00:15:11.440 { 00:15:11.440 "name": "pt3", 00:15:11.440 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:11.440 "is_configured": true, 00:15:11.440 "data_offset": 2048, 00:15:11.440 "data_size": 63488 00:15:11.440 }, 00:15:11.440 { 00:15:11.440 "name": "pt4", 00:15:11.440 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:15:11.440 "is_configured": true, 00:15:11.440 "data_offset": 2048, 00:15:11.440 "data_size": 63488 00:15:11.440 } 00:15:11.440 ] 00:15:11.440 } 00:15:11.440 } 00:15:11.440 }' 00:15:11.440 06:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:11.440 06:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:11.440 pt2 00:15:11.440 pt3 00:15:11.440 pt4' 00:15:11.440 06:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:11.440 06:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:11.440 06:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:11.440 06:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:11.441 06:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.441 06:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.441 06:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:11.699 06:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.699 06:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:11.699 06:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:11.699 06:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:11.699 06:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:11.699 06:24:55 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.699 06:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:11.699 06:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.699 06:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.699 06:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:11.699 06:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:11.699 06:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:11.699 06:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:11.699 06:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:11.699 06:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.699 06:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.699 06:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.699 06:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:11.699 06:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:11.699 06:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:11.699 06:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:11.699 06:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:15:11.699 06:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:15:11.699 06:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.699 06:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.699 06:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:11.699 06:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:11.699 06:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:11.699 06:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:11.699 06:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.699 06:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.699 [2024-11-26 06:24:55.719884] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:11.699 06:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.699 06:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 8d5a9f2b-0e8d-4fc6-9d64-3a39b7b94587 '!=' 8d5a9f2b-0e8d-4fc6-9d64-3a39b7b94587 ']' 00:15:11.699 06:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:15:11.699 06:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:11.699 06:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:11.699 06:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:11.699 06:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.699 06:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.699 [2024-11-26 06:24:55.767548] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:11.699 06:24:55 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.699 06:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:11.699 06:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:11.699 06:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:11.699 06:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:11.699 06:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:11.699 06:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:11.699 06:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:11.699 06:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:11.699 06:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:11.699 06:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.699 06:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.699 06:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.699 06:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.699 06:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.699 06:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.699 06:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:11.699 "name": "raid_bdev1", 00:15:11.700 "uuid": "8d5a9f2b-0e8d-4fc6-9d64-3a39b7b94587", 00:15:11.700 "strip_size_kb": 0, 00:15:11.700 "state": "online", 
00:15:11.700 "raid_level": "raid1", 00:15:11.700 "superblock": true, 00:15:11.700 "num_base_bdevs": 4, 00:15:11.700 "num_base_bdevs_discovered": 3, 00:15:11.700 "num_base_bdevs_operational": 3, 00:15:11.700 "base_bdevs_list": [ 00:15:11.700 { 00:15:11.700 "name": null, 00:15:11.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.700 "is_configured": false, 00:15:11.700 "data_offset": 0, 00:15:11.700 "data_size": 63488 00:15:11.700 }, 00:15:11.700 { 00:15:11.700 "name": "pt2", 00:15:11.700 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:11.700 "is_configured": true, 00:15:11.700 "data_offset": 2048, 00:15:11.700 "data_size": 63488 00:15:11.700 }, 00:15:11.700 { 00:15:11.700 "name": "pt3", 00:15:11.700 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:11.700 "is_configured": true, 00:15:11.700 "data_offset": 2048, 00:15:11.700 "data_size": 63488 00:15:11.700 }, 00:15:11.700 { 00:15:11.700 "name": "pt4", 00:15:11.700 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:11.700 "is_configured": true, 00:15:11.700 "data_offset": 2048, 00:15:11.700 "data_size": 63488 00:15:11.700 } 00:15:11.700 ] 00:15:11.700 }' 00:15:11.700 06:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:11.700 06:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.268 06:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:12.268 06:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.268 06:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.268 [2024-11-26 06:24:56.242665] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:12.268 [2024-11-26 06:24:56.242717] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:12.268 [2024-11-26 06:24:56.242817] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:15:12.268 [2024-11-26 06:24:56.242909] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:12.268 [2024-11-26 06:24:56.242920] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:12.268 06:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.268 06:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.268 06:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.268 06:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:12.268 06:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.268 06:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.268 06:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:12.268 06:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:12.268 06:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:12.268 06:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:12.268 06:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:12.268 06:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.268 06:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.268 06:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.268 06:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:12.268 06:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:12.268 
06:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:15:12.268 06:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.268 06:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.268 06:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.268 06:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:12.268 06:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:12.269 06:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:15:12.269 06:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.269 06:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.269 06:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.269 06:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:12.269 06:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:12.269 06:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:12.269 06:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:12.269 06:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:12.269 06:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.269 06:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.269 [2024-11-26 06:24:56.338492] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:12.269 [2024-11-26 06:24:56.338579] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:12.269 [2024-11-26 06:24:56.338603] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:15:12.269 [2024-11-26 06:24:56.338614] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:12.269 [2024-11-26 06:24:56.341305] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:12.269 [2024-11-26 06:24:56.341456] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:12.269 [2024-11-26 06:24:56.341598] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:12.269 [2024-11-26 06:24:56.341661] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:12.269 pt2 00:15:12.269 06:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.269 06:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:15:12.269 06:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:12.269 06:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:12.269 06:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:12.269 06:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:12.269 06:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:12.269 06:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.269 06:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.269 06:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.269 06:24:56 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.269 06:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.269 06:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.269 06:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.269 06:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.269 06:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.269 06:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.269 "name": "raid_bdev1", 00:15:12.269 "uuid": "8d5a9f2b-0e8d-4fc6-9d64-3a39b7b94587", 00:15:12.269 "strip_size_kb": 0, 00:15:12.269 "state": "configuring", 00:15:12.269 "raid_level": "raid1", 00:15:12.269 "superblock": true, 00:15:12.269 "num_base_bdevs": 4, 00:15:12.269 "num_base_bdevs_discovered": 1, 00:15:12.269 "num_base_bdevs_operational": 3, 00:15:12.269 "base_bdevs_list": [ 00:15:12.269 { 00:15:12.269 "name": null, 00:15:12.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.269 "is_configured": false, 00:15:12.269 "data_offset": 2048, 00:15:12.269 "data_size": 63488 00:15:12.269 }, 00:15:12.269 { 00:15:12.269 "name": "pt2", 00:15:12.269 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:12.269 "is_configured": true, 00:15:12.269 "data_offset": 2048, 00:15:12.269 "data_size": 63488 00:15:12.269 }, 00:15:12.269 { 00:15:12.269 "name": null, 00:15:12.269 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:12.269 "is_configured": false, 00:15:12.269 "data_offset": 2048, 00:15:12.269 "data_size": 63488 00:15:12.269 }, 00:15:12.269 { 00:15:12.269 "name": null, 00:15:12.269 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:12.269 "is_configured": false, 00:15:12.269 "data_offset": 2048, 00:15:12.269 "data_size": 63488 00:15:12.269 } 00:15:12.269 ] 00:15:12.269 }' 
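The `verify_raid_bdev_state` checks traced above repeatedly fetch `bdev_raid_get_bdevs all`, select the named bdev with a jq filter, and compare its `state`, `raid_level`, and base-bdev counters against the expected values. Below is a minimal, dependency-free sketch of that comparison run against a trimmed copy of the JSON dumped in this log; the `field` helper is hypothetical (the real script uses `rpc_cmd` plus `jq -r '.[] | select(.name == "raid_bdev1")'`), and sed stands in for jq purely so the sketch is self-contained.

```shell
# Sketch of the verify_raid_bdev_state pattern, under the assumptions above.
# JSON trimmed from the raid_bdev_info dump in this log (state after pt1 removal).
raid_bdev_info='{
  "name": "raid_bdev1",
  "state": "online",
  "raid_level": "raid1",
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 3,
  "num_base_bdevs_operational": 3
}'

# Hypothetical helper: extract one scalar field from the JSON blob with sed.
field() {
  printf '%s\n' "$raid_bdev_info" |
    sed -n "s/.*\"$1\": *\"\{0,1\}\([^\",}]*\)\"\{0,1\}.*/\1/p"
}

state=$(field state)
raid_level=$(field raid_level)
discovered=$(field num_base_bdevs_discovered)
operational=$(field num_base_bdevs_operational)

# Mirrors: verify_raid_bdev_state raid_bdev1 online raid1 0 3
[ "$state" = online ] || { echo "unexpected state: $state" >&2; exit 1; }
[ "$raid_level" = raid1 ] || { echo "unexpected level: $raid_level" >&2; exit 1; }
[ "$discovered" -eq "$operational" ] || { echo "degraded: $discovered/$operational" >&2; exit 1; }
echo "raid_bdev1: $state $raid_level ($discovered/$operational base bdevs)"
```

This matches the log's observed transition from 4/4 to 3/3 operational base bdevs after `bdev_passthru_delete pt1`: with raid1 redundancy the array stays `online`, so the expected operational count passed to the verifier drops from 4 to 3 while the expected state stays `online`.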
00:15:12.269 06:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.269 06:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.836 06:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:12.836 06:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:12.836 06:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:12.836 06:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.836 06:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.836 [2024-11-26 06:24:56.813790] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:12.836 [2024-11-26 06:24:56.813977] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:12.836 [2024-11-26 06:24:56.814021] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:15:12.836 [2024-11-26 06:24:56.814095] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:12.836 [2024-11-26 06:24:56.814668] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:12.836 [2024-11-26 06:24:56.814739] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:12.836 [2024-11-26 06:24:56.814888] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:12.837 [2024-11-26 06:24:56.814954] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:12.837 pt3 00:15:12.837 06:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.837 06:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:15:12.837 06:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:12.837 06:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:12.837 06:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:12.837 06:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:12.837 06:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:12.837 06:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.837 06:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.837 06:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.837 06:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.837 06:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.837 06:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.837 06:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.837 06:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.837 06:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.837 06:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.837 "name": "raid_bdev1", 00:15:12.837 "uuid": "8d5a9f2b-0e8d-4fc6-9d64-3a39b7b94587", 00:15:12.837 "strip_size_kb": 0, 00:15:12.837 "state": "configuring", 00:15:12.837 "raid_level": "raid1", 00:15:12.837 "superblock": true, 00:15:12.837 "num_base_bdevs": 4, 00:15:12.837 "num_base_bdevs_discovered": 2, 00:15:12.837 "num_base_bdevs_operational": 3, 00:15:12.837 
"base_bdevs_list": [ 00:15:12.837 { 00:15:12.837 "name": null, 00:15:12.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.837 "is_configured": false, 00:15:12.837 "data_offset": 2048, 00:15:12.837 "data_size": 63488 00:15:12.837 }, 00:15:12.837 { 00:15:12.837 "name": "pt2", 00:15:12.837 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:12.837 "is_configured": true, 00:15:12.837 "data_offset": 2048, 00:15:12.837 "data_size": 63488 00:15:12.837 }, 00:15:12.837 { 00:15:12.837 "name": "pt3", 00:15:12.837 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:12.837 "is_configured": true, 00:15:12.837 "data_offset": 2048, 00:15:12.837 "data_size": 63488 00:15:12.837 }, 00:15:12.837 { 00:15:12.837 "name": null, 00:15:12.837 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:12.837 "is_configured": false, 00:15:12.837 "data_offset": 2048, 00:15:12.837 "data_size": 63488 00:15:12.837 } 00:15:12.837 ] 00:15:12.837 }' 00:15:12.837 06:24:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.837 06:24:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.404 06:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:13.404 06:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:13.404 06:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:15:13.404 06:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:13.404 06:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.405 06:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.405 [2024-11-26 06:24:57.309102] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:13.405 [2024-11-26 06:24:57.309200] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:13.405 [2024-11-26 06:24:57.309234] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:15:13.405 [2024-11-26 06:24:57.309249] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:13.405 [2024-11-26 06:24:57.309892] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:13.405 [2024-11-26 06:24:57.309947] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:13.405 [2024-11-26 06:24:57.310096] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:13.405 [2024-11-26 06:24:57.310146] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:13.405 [2024-11-26 06:24:57.310388] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:13.405 [2024-11-26 06:24:57.310408] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:13.405 [2024-11-26 06:24:57.310699] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:13.405 [2024-11-26 06:24:57.310875] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:13.405 [2024-11-26 06:24:57.310890] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:15:13.405 [2024-11-26 06:24:57.311096] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:13.405 pt4 00:15:13.405 06:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.405 06:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:13.405 06:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:13.405 06:24:57 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:13.405 06:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:13.405 06:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:13.405 06:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:13.405 06:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.405 06:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.405 06:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.405 06:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.405 06:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.405 06:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.405 06:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.405 06:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.405 06:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.405 06:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:13.405 "name": "raid_bdev1", 00:15:13.405 "uuid": "8d5a9f2b-0e8d-4fc6-9d64-3a39b7b94587", 00:15:13.405 "strip_size_kb": 0, 00:15:13.405 "state": "online", 00:15:13.405 "raid_level": "raid1", 00:15:13.405 "superblock": true, 00:15:13.405 "num_base_bdevs": 4, 00:15:13.405 "num_base_bdevs_discovered": 3, 00:15:13.405 "num_base_bdevs_operational": 3, 00:15:13.405 "base_bdevs_list": [ 00:15:13.405 { 00:15:13.405 "name": null, 00:15:13.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.405 "is_configured": false, 00:15:13.405 
"data_offset": 2048, 00:15:13.405 "data_size": 63488 00:15:13.405 }, 00:15:13.405 { 00:15:13.405 "name": "pt2", 00:15:13.405 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:13.405 "is_configured": true, 00:15:13.405 "data_offset": 2048, 00:15:13.405 "data_size": 63488 00:15:13.405 }, 00:15:13.405 { 00:15:13.405 "name": "pt3", 00:15:13.405 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:13.405 "is_configured": true, 00:15:13.405 "data_offset": 2048, 00:15:13.405 "data_size": 63488 00:15:13.405 }, 00:15:13.405 { 00:15:13.405 "name": "pt4", 00:15:13.405 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:13.405 "is_configured": true, 00:15:13.405 "data_offset": 2048, 00:15:13.405 "data_size": 63488 00:15:13.405 } 00:15:13.405 ] 00:15:13.405 }' 00:15:13.405 06:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:13.405 06:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.663 06:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:13.663 06:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.663 06:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.663 [2024-11-26 06:24:57.768368] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:13.663 [2024-11-26 06:24:57.768528] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:13.663 [2024-11-26 06:24:57.768673] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:13.663 [2024-11-26 06:24:57.768790] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:13.663 [2024-11-26 06:24:57.768840] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:15:13.663 06:24:57 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.663 06:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.663 06:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.663 06:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.663 06:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:13.663 06:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.922 06:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:13.922 06:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:13.922 06:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:15:13.922 06:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:15:13.922 06:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:15:13.922 06:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.922 06:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.922 06:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.922 06:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:13.922 06:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.922 06:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.922 [2024-11-26 06:24:57.832253] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:13.922 [2024-11-26 06:24:57.832435] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:15:13.922 [2024-11-26 06:24:57.832519] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:15:13.922 [2024-11-26 06:24:57.832585] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:13.922 [2024-11-26 06:24:57.835224] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:13.922 [2024-11-26 06:24:57.835308] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:13.922 [2024-11-26 06:24:57.835467] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:13.922 [2024-11-26 06:24:57.835577] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:13.922 [2024-11-26 06:24:57.835773] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:13.922 [2024-11-26 06:24:57.835831] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:13.922 [2024-11-26 06:24:57.835918] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:15:13.922 [2024-11-26 06:24:57.836079] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:13.922 [2024-11-26 06:24:57.836251] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:13.922 pt1 00:15:13.922 06:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.922 06:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:15:13.922 06:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:15:13.922 06:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:13.922 06:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:15:13.922 06:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:13.922 06:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:13.922 06:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:13.922 06:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.922 06:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.922 06:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.922 06:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.922 06:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.922 06:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.922 06:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.922 06:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.922 06:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.922 06:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:13.922 "name": "raid_bdev1", 00:15:13.922 "uuid": "8d5a9f2b-0e8d-4fc6-9d64-3a39b7b94587", 00:15:13.922 "strip_size_kb": 0, 00:15:13.922 "state": "configuring", 00:15:13.922 "raid_level": "raid1", 00:15:13.922 "superblock": true, 00:15:13.922 "num_base_bdevs": 4, 00:15:13.922 "num_base_bdevs_discovered": 2, 00:15:13.922 "num_base_bdevs_operational": 3, 00:15:13.922 "base_bdevs_list": [ 00:15:13.922 { 00:15:13.922 "name": null, 00:15:13.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.923 "is_configured": false, 00:15:13.923 "data_offset": 2048, 00:15:13.923 
"data_size": 63488 00:15:13.923 }, 00:15:13.923 { 00:15:13.923 "name": "pt2", 00:15:13.923 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:13.923 "is_configured": true, 00:15:13.923 "data_offset": 2048, 00:15:13.923 "data_size": 63488 00:15:13.923 }, 00:15:13.923 { 00:15:13.923 "name": "pt3", 00:15:13.923 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:13.923 "is_configured": true, 00:15:13.923 "data_offset": 2048, 00:15:13.923 "data_size": 63488 00:15:13.923 }, 00:15:13.923 { 00:15:13.923 "name": null, 00:15:13.923 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:13.923 "is_configured": false, 00:15:13.923 "data_offset": 2048, 00:15:13.923 "data_size": 63488 00:15:13.923 } 00:15:13.923 ] 00:15:13.923 }' 00:15:13.923 06:24:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:13.923 06:24:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.491 06:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:15:14.491 06:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.491 06:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.491 06:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:14.491 06:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.491 06:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:15:14.491 06:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:14.491 06:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.491 06:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.491 [2024-11-26 
06:24:58.411407] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:14.491 [2024-11-26 06:24:58.411549] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:14.491 [2024-11-26 06:24:58.411651] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:15:14.491 [2024-11-26 06:24:58.411706] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:14.491 [2024-11-26 06:24:58.412311] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:14.491 [2024-11-26 06:24:58.412380] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:14.491 [2024-11-26 06:24:58.412602] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:14.491 [2024-11-26 06:24:58.412693] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:14.491 [2024-11-26 06:24:58.412909] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:15:14.491 [2024-11-26 06:24:58.412924] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:14.491 [2024-11-26 06:24:58.413225] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:14.491 [2024-11-26 06:24:58.413395] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:15:14.491 [2024-11-26 06:24:58.413408] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:15:14.491 [2024-11-26 06:24:58.413562] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:14.491 pt4 00:15:14.491 06:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.491 06:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:14.491 06:24:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:14.491 06:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:14.491 06:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:14.491 06:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:14.491 06:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:14.491 06:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.491 06:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.491 06:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.491 06:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.491 06:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.491 06:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.491 06:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.491 06:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.491 06:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.491 06:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:14.491 "name": "raid_bdev1", 00:15:14.491 "uuid": "8d5a9f2b-0e8d-4fc6-9d64-3a39b7b94587", 00:15:14.491 "strip_size_kb": 0, 00:15:14.491 "state": "online", 00:15:14.491 "raid_level": "raid1", 00:15:14.491 "superblock": true, 00:15:14.491 "num_base_bdevs": 4, 00:15:14.491 "num_base_bdevs_discovered": 3, 00:15:14.491 "num_base_bdevs_operational": 3, 00:15:14.491 "base_bdevs_list": [ 00:15:14.491 { 
00:15:14.491 "name": null, 00:15:14.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.491 "is_configured": false, 00:15:14.491 "data_offset": 2048, 00:15:14.491 "data_size": 63488 00:15:14.491 }, 00:15:14.491 { 00:15:14.491 "name": "pt2", 00:15:14.491 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:14.491 "is_configured": true, 00:15:14.491 "data_offset": 2048, 00:15:14.491 "data_size": 63488 00:15:14.491 }, 00:15:14.491 { 00:15:14.491 "name": "pt3", 00:15:14.491 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:14.491 "is_configured": true, 00:15:14.491 "data_offset": 2048, 00:15:14.491 "data_size": 63488 00:15:14.491 }, 00:15:14.491 { 00:15:14.491 "name": "pt4", 00:15:14.491 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:14.491 "is_configured": true, 00:15:14.491 "data_offset": 2048, 00:15:14.491 "data_size": 63488 00:15:14.491 } 00:15:14.491 ] 00:15:14.491 }' 00:15:14.491 06:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.491 06:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.750 06:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:14.750 06:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:14.750 06:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.750 06:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.750 06:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.010 06:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:15.010 06:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:15.010 06:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.010 
06:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.010 06:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:15.010 [2024-11-26 06:24:58.898932] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:15.010 06:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.010 06:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 8d5a9f2b-0e8d-4fc6-9d64-3a39b7b94587 '!=' 8d5a9f2b-0e8d-4fc6-9d64-3a39b7b94587 ']' 00:15:15.010 06:24:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 75037 00:15:15.010 06:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 75037 ']' 00:15:15.010 06:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 75037 00:15:15.010 06:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:15:15.010 06:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:15.010 06:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75037 00:15:15.010 killing process with pid 75037 00:15:15.010 06:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:15.010 06:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:15.010 06:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75037' 00:15:15.010 06:24:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 75037 00:15:15.010 [2024-11-26 06:24:58.974700] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:15.010 [2024-11-26 06:24:58.974820] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:15.010 06:24:58 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 75037 00:15:15.010 [2024-11-26 06:24:58.974913] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:15.010 [2024-11-26 06:24:58.974927] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:15:15.276 [2024-11-26 06:24:59.395103] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:16.660 ************************************ 00:15:16.660 END TEST raid_superblock_test 00:15:16.660 ************************************ 00:15:16.660 06:25:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:16.660 00:15:16.660 real 0m9.054s 00:15:16.660 user 0m14.139s 00:15:16.660 sys 0m1.614s 00:15:16.660 06:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:16.660 06:25:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.660 06:25:00 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:15:16.660 06:25:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:16.660 06:25:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:16.660 06:25:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:16.660 ************************************ 00:15:16.660 START TEST raid_read_error_test 00:15:16.660 ************************************ 00:15:16.660 06:25:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:15:16.660 06:25:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:15:16.660 06:25:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:15:16.660 06:25:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:15:16.660 
06:25:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:15:16.660 06:25:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:16.660 06:25:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:15:16.660 06:25:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:16.660 06:25:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:16.660 06:25:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:15:16.660 06:25:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:16.660 06:25:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:16.660 06:25:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:15:16.660 06:25:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:16.660 06:25:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:16.660 06:25:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:15:16.660 06:25:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:16.660 06:25:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:16.660 06:25:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:16.660 06:25:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:15:16.660 06:25:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:15:16.660 06:25:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:15:16.660 06:25:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:15:16.660 06:25:00 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:15:16.660 06:25:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:15:16.660 06:25:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:15:16.660 06:25:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:15:16.660 06:25:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:15:16.660 06:25:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.dqfRY523pd 00:15:16.660 06:25:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75531 00:15:16.660 06:25:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:16.660 06:25:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75531 00:15:16.660 06:25:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 75531 ']' 00:15:16.660 06:25:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:16.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:16.660 06:25:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:16.660 06:25:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:16.660 06:25:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:16.660 06:25:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.919 [2024-11-26 06:25:00.829869] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:15:16.919 [2024-11-26 06:25:00.830143] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75531 ] 00:15:16.920 [2024-11-26 06:25:01.016441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.178 [2024-11-26 06:25:01.176899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:17.437 [2024-11-26 06:25:01.447354] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:17.437 [2024-11-26 06:25:01.447468] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:17.696 06:25:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:17.696 06:25:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:15:17.696 06:25:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:17.696 06:25:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:17.696 06:25:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.696 06:25:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.696 BaseBdev1_malloc 00:15:17.696 06:25:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.696 06:25:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:15:17.696 06:25:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.696 06:25:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.696 true 00:15:17.696 06:25:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:17.696 06:25:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:17.696 06:25:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.696 06:25:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.696 [2024-11-26 06:25:01.809346] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:17.696 [2024-11-26 06:25:01.809492] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:17.696 [2024-11-26 06:25:01.809566] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:17.696 [2024-11-26 06:25:01.809621] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:17.696 [2024-11-26 06:25:01.812444] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:17.696 [2024-11-26 06:25:01.812532] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:17.696 BaseBdev1 00:15:17.696 06:25:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.696 06:25:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:17.697 06:25:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:17.697 06:25:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.697 06:25:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.957 BaseBdev2_malloc 00:15:17.957 06:25:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.957 06:25:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:15:17.957 06:25:01 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.957 06:25:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.957 true 00:15:17.957 06:25:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.957 06:25:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:17.957 06:25:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.957 06:25:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.957 [2024-11-26 06:25:01.887622] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:17.957 [2024-11-26 06:25:01.887789] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:17.957 [2024-11-26 06:25:01.887819] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:17.957 [2024-11-26 06:25:01.887831] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:17.957 [2024-11-26 06:25:01.890737] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:17.957 [2024-11-26 06:25:01.890789] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:17.957 BaseBdev2 00:15:17.957 06:25:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.957 06:25:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:17.957 06:25:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:17.957 06:25:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.957 06:25:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.957 BaseBdev3_malloc 00:15:17.957 06:25:01 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.957 06:25:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:15:17.957 06:25:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.957 06:25:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.957 true 00:15:17.957 06:25:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.957 06:25:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:17.957 06:25:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.957 06:25:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.957 [2024-11-26 06:25:01.978216] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:17.957 [2024-11-26 06:25:01.978282] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:17.957 [2024-11-26 06:25:01.978303] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:17.957 [2024-11-26 06:25:01.978315] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:17.957 [2024-11-26 06:25:01.980996] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:17.957 [2024-11-26 06:25:01.981037] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:17.957 BaseBdev3 00:15:17.957 06:25:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.957 06:25:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:17.957 06:25:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:15:17.957 06:25:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.957 06:25:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.957 BaseBdev4_malloc 00:15:17.957 06:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.957 06:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:15:17.957 06:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.957 06:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.957 true 00:15:17.957 06:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.957 06:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:15:17.957 06:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.957 06:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.957 [2024-11-26 06:25:02.060666] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:15:17.957 [2024-11-26 06:25:02.060755] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:17.957 [2024-11-26 06:25:02.060787] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:17.957 [2024-11-26 06:25:02.060805] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:17.957 [2024-11-26 06:25:02.063845] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:17.957 [2024-11-26 06:25:02.063893] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:17.957 BaseBdev4 00:15:17.957 06:25:02 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.957 06:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:15:17.957 06:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.957 06:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.957 [2024-11-26 06:25:02.072829] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:17.957 [2024-11-26 06:25:02.075758] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:17.957 [2024-11-26 06:25:02.075894] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:17.957 [2024-11-26 06:25:02.076006] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:17.957 [2024-11-26 06:25:02.076498] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:15:17.957 [2024-11-26 06:25:02.078758] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:17.957 [2024-11-26 06:25:02.079217] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:15:17.957 [2024-11-26 06:25:02.079482] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:15:17.958 [2024-11-26 06:25:02.079500] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:15:17.958 [2024-11-26 06:25:02.079826] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:17.958 06:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.958 06:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:17.958 06:25:02 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:17.958 06:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:17.958 06:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:17.958 06:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:17.958 06:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:17.958 06:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.958 06:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:17.958 06:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:17.958 06:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.958 06:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.958 06:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.958 06:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.958 06:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.217 06:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.217 06:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:18.217 "name": "raid_bdev1", 00:15:18.217 "uuid": "1177ea94-f0ed-4894-9434-cef62525cf5e", 00:15:18.217 "strip_size_kb": 0, 00:15:18.217 "state": "online", 00:15:18.217 "raid_level": "raid1", 00:15:18.217 "superblock": true, 00:15:18.217 "num_base_bdevs": 4, 00:15:18.217 "num_base_bdevs_discovered": 4, 00:15:18.217 "num_base_bdevs_operational": 4, 00:15:18.217 "base_bdevs_list": [ 00:15:18.217 { 
00:15:18.217 "name": "BaseBdev1", 00:15:18.217 "uuid": "8de3873e-1647-50e6-8c8c-8c1dbd4525f6", 00:15:18.217 "is_configured": true, 00:15:18.217 "data_offset": 2048, 00:15:18.217 "data_size": 63488 00:15:18.217 }, 00:15:18.217 { 00:15:18.217 "name": "BaseBdev2", 00:15:18.217 "uuid": "13448238-02d0-5d36-8066-b409a727ba71", 00:15:18.217 "is_configured": true, 00:15:18.217 "data_offset": 2048, 00:15:18.217 "data_size": 63488 00:15:18.217 }, 00:15:18.217 { 00:15:18.217 "name": "BaseBdev3", 00:15:18.217 "uuid": "5e2e17c7-7f79-51d1-b7d2-dddb75eefabe", 00:15:18.217 "is_configured": true, 00:15:18.217 "data_offset": 2048, 00:15:18.217 "data_size": 63488 00:15:18.217 }, 00:15:18.217 { 00:15:18.217 "name": "BaseBdev4", 00:15:18.217 "uuid": "2ff18707-5e03-57cb-9708-e8b66924d5fc", 00:15:18.217 "is_configured": true, 00:15:18.217 "data_offset": 2048, 00:15:18.217 "data_size": 63488 00:15:18.217 } 00:15:18.217 ] 00:15:18.217 }' 00:15:18.217 06:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:18.217 06:25:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.477 06:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:15:18.477 06:25:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:18.736 [2024-11-26 06:25:02.672446] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:15:19.673 06:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:15:19.673 06:25:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.673 06:25:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.673 06:25:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.673 06:25:03 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:15:19.673 06:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:15:19.673 06:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:15:19.673 06:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:15:19.673 06:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:19.673 06:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:19.673 06:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:19.673 06:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:19.673 06:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:19.673 06:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:19.673 06:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.673 06:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.673 06:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.673 06:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.673 06:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.673 06:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.673 06:25:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.673 06:25:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.673 06:25:03 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.673 06:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.673 "name": "raid_bdev1", 00:15:19.673 "uuid": "1177ea94-f0ed-4894-9434-cef62525cf5e", 00:15:19.673 "strip_size_kb": 0, 00:15:19.673 "state": "online", 00:15:19.673 "raid_level": "raid1", 00:15:19.673 "superblock": true, 00:15:19.673 "num_base_bdevs": 4, 00:15:19.673 "num_base_bdevs_discovered": 4, 00:15:19.673 "num_base_bdevs_operational": 4, 00:15:19.673 "base_bdevs_list": [ 00:15:19.673 { 00:15:19.673 "name": "BaseBdev1", 00:15:19.673 "uuid": "8de3873e-1647-50e6-8c8c-8c1dbd4525f6", 00:15:19.673 "is_configured": true, 00:15:19.673 "data_offset": 2048, 00:15:19.673 "data_size": 63488 00:15:19.673 }, 00:15:19.673 { 00:15:19.673 "name": "BaseBdev2", 00:15:19.673 "uuid": "13448238-02d0-5d36-8066-b409a727ba71", 00:15:19.673 "is_configured": true, 00:15:19.674 "data_offset": 2048, 00:15:19.674 "data_size": 63488 00:15:19.674 }, 00:15:19.674 { 00:15:19.674 "name": "BaseBdev3", 00:15:19.674 "uuid": "5e2e17c7-7f79-51d1-b7d2-dddb75eefabe", 00:15:19.674 "is_configured": true, 00:15:19.674 "data_offset": 2048, 00:15:19.674 "data_size": 63488 00:15:19.674 }, 00:15:19.674 { 00:15:19.674 "name": "BaseBdev4", 00:15:19.674 "uuid": "2ff18707-5e03-57cb-9708-e8b66924d5fc", 00:15:19.674 "is_configured": true, 00:15:19.674 "data_offset": 2048, 00:15:19.674 "data_size": 63488 00:15:19.674 } 00:15:19.674 ] 00:15:19.674 }' 00:15:19.674 06:25:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.674 06:25:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.240 06:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:20.240 06:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.240 06:25:04 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:20.240 [2024-11-26 06:25:04.095364] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:20.240 [2024-11-26 06:25:04.095485] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:20.240 [2024-11-26 06:25:04.098395] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:20.240 [2024-11-26 06:25:04.098510] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:20.240 [2024-11-26 06:25:04.098695] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:20.240 [2024-11-26 06:25:04.098751] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:15:20.240 { 00:15:20.240 "results": [ 00:15:20.240 { 00:15:20.240 "job": "raid_bdev1", 00:15:20.240 "core_mask": "0x1", 00:15:20.240 "workload": "randrw", 00:15:20.240 "percentage": 50, 00:15:20.240 "status": "finished", 00:15:20.240 "queue_depth": 1, 00:15:20.240 "io_size": 131072, 00:15:20.240 "runtime": 1.423236, 00:15:20.240 "iops": 7150.606083601033, 00:15:20.240 "mibps": 893.8257604501291, 00:15:20.240 "io_failed": 0, 00:15:20.240 "io_timeout": 0, 00:15:20.240 "avg_latency_us": 137.01646035477722, 00:15:20.240 "min_latency_us": 23.58777292576419, 00:15:20.240 "max_latency_us": 1802.955458515284 00:15:20.240 } 00:15:20.240 ], 00:15:20.240 "core_count": 1 00:15:20.240 } 00:15:20.241 06:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.241 06:25:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75531 00:15:20.241 06:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 75531 ']' 00:15:20.241 06:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 75531 00:15:20.241 06:25:04 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:15:20.241 06:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:20.241 06:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75531 00:15:20.241 06:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:20.241 06:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:20.241 06:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75531' 00:15:20.241 killing process with pid 75531 00:15:20.241 06:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 75531 00:15:20.241 06:25:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 75531 00:15:20.241 [2024-11-26 06:25:04.135545] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:20.499 [2024-11-26 06:25:04.526306] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:21.887 06:25:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.dqfRY523pd 00:15:21.887 06:25:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:15:21.887 06:25:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:15:21.887 06:25:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:15:21.887 06:25:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:15:21.887 06:25:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:21.887 06:25:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:21.887 06:25:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:15:21.887 00:15:21.887 real 0m5.242s 00:15:21.887 user 0m6.061s 00:15:21.887 sys 0m0.787s 
00:15:21.887 ************************************ 00:15:21.887 END TEST raid_read_error_test 00:15:21.887 ************************************ 00:15:21.887 06:25:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:21.887 06:25:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.887 06:25:05 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:15:21.887 06:25:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:21.887 06:25:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:21.887 06:25:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:21.887 ************************************ 00:15:21.887 START TEST raid_write_error_test 00:15:21.887 ************************************ 00:15:21.887 06:25:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:15:21.887 06:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:15:21.887 06:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:15:21.887 06:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:15:21.887 06:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:15:21.887 06:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:21.887 06:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:15:21.887 06:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:21.887 06:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:21.887 06:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:15:21.887 06:25:06 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:21.887 06:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:21.887 06:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:15:21.887 06:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:21.887 06:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:21.887 06:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:15:21.887 06:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:21.887 06:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:21.887 06:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:21.887 06:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:15:21.887 06:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:15:21.887 06:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:15:21.887 06:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:15:21.887 06:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:15:21.887 06:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:15:21.887 06:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:15:21.887 06:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:15:21.887 06:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:15:22.145 06:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.dbsjhtYIYM 00:15:22.145 06:25:06 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75677 00:15:22.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:22.145 06:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75677 00:15:22.145 06:25:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 75677 ']' 00:15:22.145 06:25:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:22.145 06:25:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:22.146 06:25:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:22.146 06:25:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:22.146 06:25:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.146 06:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:22.146 [2024-11-26 06:25:06.118716] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:15:22.146 [2024-11-26 06:25:06.118861] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75677 ] 00:15:22.404 [2024-11-26 06:25:06.304417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:22.404 [2024-11-26 06:25:06.447795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:22.662 [2024-11-26 06:25:06.706459] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:22.662 [2024-11-26 06:25:06.706525] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:22.921 06:25:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:22.921 06:25:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:15:22.921 06:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:22.921 06:25:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:22.921 06:25:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.921 06:25:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.921 BaseBdev1_malloc 00:15:22.921 06:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.921 06:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:15:22.921 06:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.921 06:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.921 true 00:15:22.921 06:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:15:22.921 06:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:22.921 06:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.921 06:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.182 [2024-11-26 06:25:07.057076] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:23.182 [2024-11-26 06:25:07.057154] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:23.182 [2024-11-26 06:25:07.057179] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:23.182 [2024-11-26 06:25:07.057194] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:23.182 [2024-11-26 06:25:07.059936] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:23.182 [2024-11-26 06:25:07.059981] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:23.182 BaseBdev1 00:15:23.182 06:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.182 06:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:23.182 06:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:23.182 06:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.182 06:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.182 BaseBdev2_malloc 00:15:23.182 06:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.182 06:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:15:23.182 06:25:07 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.182 06:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.182 true 00:15:23.182 06:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.182 06:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:23.182 06:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.182 06:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.182 [2024-11-26 06:25:07.137736] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:23.182 [2024-11-26 06:25:07.137811] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:23.182 [2024-11-26 06:25:07.137851] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:23.182 [2024-11-26 06:25:07.137866] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:23.182 [2024-11-26 06:25:07.140924] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:23.182 [2024-11-26 06:25:07.141026] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:23.182 BaseBdev2 00:15:23.182 06:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.182 06:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:23.182 06:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:23.182 06:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.182 06:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:15:23.182 BaseBdev3_malloc 00:15:23.182 06:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.182 06:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:15:23.182 06:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.182 06:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.182 true 00:15:23.182 06:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.182 06:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:23.182 06:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.182 06:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.182 [2024-11-26 06:25:07.234404] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:23.182 [2024-11-26 06:25:07.234476] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:23.182 [2024-11-26 06:25:07.234500] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:23.182 [2024-11-26 06:25:07.234513] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:23.182 [2024-11-26 06:25:07.237289] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:23.182 [2024-11-26 06:25:07.237334] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:23.182 BaseBdev3 00:15:23.182 06:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.182 06:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:23.182 06:25:07 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:23.182 06:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.182 06:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.182 BaseBdev4_malloc 00:15:23.183 06:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.183 06:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:15:23.183 06:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.183 06:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.183 true 00:15:23.183 06:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.183 06:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:15:23.183 06:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.183 06:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.443 [2024-11-26 06:25:07.317044] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:15:23.443 [2024-11-26 06:25:07.317164] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:23.443 [2024-11-26 06:25:07.317191] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:23.443 [2024-11-26 06:25:07.317205] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:23.443 [2024-11-26 06:25:07.320254] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:23.443 [2024-11-26 06:25:07.320391] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:23.443 BaseBdev4 
00:15:23.443 06:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.443 06:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:15:23.443 06:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.443 06:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.443 [2024-11-26 06:25:07.329204] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:23.443 [2024-11-26 06:25:07.331683] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:23.443 [2024-11-26 06:25:07.331854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:23.443 [2024-11-26 06:25:07.331954] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:23.443 [2024-11-26 06:25:07.332274] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:15:23.443 [2024-11-26 06:25:07.332293] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:23.443 [2024-11-26 06:25:07.332662] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:15:23.443 [2024-11-26 06:25:07.332888] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:15:23.443 [2024-11-26 06:25:07.332899] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:15:23.443 [2024-11-26 06:25:07.333236] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:23.443 06:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.443 06:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:15:23.443 06:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:23.443 06:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:23.443 06:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:23.443 06:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:23.443 06:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:23.443 06:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.443 06:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.443 06:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.443 06:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.443 06:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.443 06:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.443 06:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.443 06:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.443 06:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.443 06:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.443 "name": "raid_bdev1", 00:15:23.443 "uuid": "e444d222-758e-42e5-a1e9-77a0e6d84dc7", 00:15:23.443 "strip_size_kb": 0, 00:15:23.443 "state": "online", 00:15:23.443 "raid_level": "raid1", 00:15:23.443 "superblock": true, 00:15:23.443 "num_base_bdevs": 4, 00:15:23.443 "num_base_bdevs_discovered": 4, 00:15:23.443 
"num_base_bdevs_operational": 4, 00:15:23.443 "base_bdevs_list": [ 00:15:23.443 { 00:15:23.443 "name": "BaseBdev1", 00:15:23.443 "uuid": "63cd60d1-fbb6-563b-8971-94a1efe4d32a", 00:15:23.443 "is_configured": true, 00:15:23.443 "data_offset": 2048, 00:15:23.443 "data_size": 63488 00:15:23.443 }, 00:15:23.443 { 00:15:23.443 "name": "BaseBdev2", 00:15:23.443 "uuid": "00a13e6e-ecee-5021-a2fd-4f1caa50d9a0", 00:15:23.443 "is_configured": true, 00:15:23.443 "data_offset": 2048, 00:15:23.443 "data_size": 63488 00:15:23.443 }, 00:15:23.443 { 00:15:23.443 "name": "BaseBdev3", 00:15:23.443 "uuid": "9e675710-eaaf-5b95-ae70-c1c2e29408a3", 00:15:23.443 "is_configured": true, 00:15:23.443 "data_offset": 2048, 00:15:23.443 "data_size": 63488 00:15:23.443 }, 00:15:23.443 { 00:15:23.443 "name": "BaseBdev4", 00:15:23.443 "uuid": "c85cf9b0-2e65-5565-8532-d4d8a9fbe274", 00:15:23.443 "is_configured": true, 00:15:23.443 "data_offset": 2048, 00:15:23.443 "data_size": 63488 00:15:23.443 } 00:15:23.443 ] 00:15:23.443 }' 00:15:23.443 06:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.443 06:25:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.703 06:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:15:23.703 06:25:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:23.962 [2024-11-26 06:25:07.873921] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:15:24.900 06:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:15:24.900 06:25:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.900 06:25:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.900 [2024-11-26 06:25:08.786094] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:15:24.900 [2024-11-26 06:25:08.786289] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:24.900 [2024-11-26 06:25:08.786575] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:15:24.900 06:25:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.900 06:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:15:24.900 06:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:15:24.900 06:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:15:24.900 06:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:15:24.900 06:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:24.900 06:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:24.900 06:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:24.900 06:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:24.900 06:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:24.900 06:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:24.900 06:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.900 06:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.900 06:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.900 06:25:08 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.900 06:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.900 06:25:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.900 06:25:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.900 06:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.900 06:25:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.900 06:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.900 "name": "raid_bdev1", 00:15:24.900 "uuid": "e444d222-758e-42e5-a1e9-77a0e6d84dc7", 00:15:24.900 "strip_size_kb": 0, 00:15:24.900 "state": "online", 00:15:24.900 "raid_level": "raid1", 00:15:24.900 "superblock": true, 00:15:24.900 "num_base_bdevs": 4, 00:15:24.900 "num_base_bdevs_discovered": 3, 00:15:24.900 "num_base_bdevs_operational": 3, 00:15:24.900 "base_bdevs_list": [ 00:15:24.900 { 00:15:24.900 "name": null, 00:15:24.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.900 "is_configured": false, 00:15:24.900 "data_offset": 0, 00:15:24.900 "data_size": 63488 00:15:24.900 }, 00:15:24.900 { 00:15:24.900 "name": "BaseBdev2", 00:15:24.900 "uuid": "00a13e6e-ecee-5021-a2fd-4f1caa50d9a0", 00:15:24.900 "is_configured": true, 00:15:24.900 "data_offset": 2048, 00:15:24.900 "data_size": 63488 00:15:24.900 }, 00:15:24.900 { 00:15:24.900 "name": "BaseBdev3", 00:15:24.900 "uuid": "9e675710-eaaf-5b95-ae70-c1c2e29408a3", 00:15:24.900 "is_configured": true, 00:15:24.900 "data_offset": 2048, 00:15:24.900 "data_size": 63488 00:15:24.900 }, 00:15:24.900 { 00:15:24.900 "name": "BaseBdev4", 00:15:24.900 "uuid": "c85cf9b0-2e65-5565-8532-d4d8a9fbe274", 00:15:24.900 "is_configured": true, 00:15:24.900 "data_offset": 2048, 00:15:24.900 "data_size": 63488 00:15:24.900 } 00:15:24.900 ] 
00:15:24.900 }' 00:15:24.900 06:25:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.900 06:25:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.160 06:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:25.160 06:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.160 06:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.160 [2024-11-26 06:25:09.256138] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:25.160 [2024-11-26 06:25:09.256265] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:25.160 [2024-11-26 06:25:09.259439] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:25.160 [2024-11-26 06:25:09.259552] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:25.160 [2024-11-26 06:25:09.259728] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:25.160 [2024-11-26 06:25:09.259787] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:15:25.160 { 00:15:25.160 "results": [ 00:15:25.160 { 00:15:25.160 "job": "raid_bdev1", 00:15:25.160 "core_mask": "0x1", 00:15:25.160 "workload": "randrw", 00:15:25.160 "percentage": 50, 00:15:25.160 "status": "finished", 00:15:25.160 "queue_depth": 1, 00:15:25.160 "io_size": 131072, 00:15:25.160 "runtime": 1.382464, 00:15:25.160 "iops": 7592.241099949077, 00:15:25.160 "mibps": 949.0301374936346, 00:15:25.160 "io_failed": 0, 00:15:25.160 "io_timeout": 0, 00:15:25.160 "avg_latency_us": 128.70267866652466, 00:15:25.160 "min_latency_us": 25.2646288209607, 00:15:25.160 "max_latency_us": 1845.8829694323144 00:15:25.160 } 00:15:25.160 ], 00:15:25.160 "core_count": 1 
00:15:25.160 } 00:15:25.160 06:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.160 06:25:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75677 00:15:25.160 06:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 75677 ']' 00:15:25.160 06:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 75677 00:15:25.160 06:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:15:25.160 06:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:25.160 06:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75677 00:15:25.420 06:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:25.420 killing process with pid 75677 00:15:25.420 06:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:25.420 06:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75677' 00:15:25.421 06:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 75677 00:15:25.421 06:25:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 75677 00:15:25.421 [2024-11-26 06:25:09.307846] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:25.681 [2024-11-26 06:25:09.702409] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:27.087 06:25:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.dbsjhtYIYM 00:15:27.087 06:25:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:15:27.087 06:25:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:15:27.087 06:25:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:15:27.087 06:25:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:15:27.087 ************************************ 00:15:27.087 END TEST raid_write_error_test 00:15:27.087 ************************************ 00:15:27.087 06:25:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:27.087 06:25:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:27.087 06:25:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:15:27.087 00:15:27.087 real 0m5.102s 00:15:27.087 user 0m5.833s 00:15:27.087 sys 0m0.751s 00:15:27.087 06:25:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:27.087 06:25:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.087 06:25:11 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:15:27.087 06:25:11 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:15:27.087 06:25:11 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:15:27.087 06:25:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:27.087 06:25:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:27.087 06:25:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:27.087 ************************************ 00:15:27.087 START TEST raid_rebuild_test 00:15:27.087 ************************************ 00:15:27.087 06:25:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:15:27.087 06:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:27.087 06:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:15:27.087 06:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:27.087 
06:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:27.087 06:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:27.087 06:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:27.087 06:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:27.087 06:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:27.087 06:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:27.087 06:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:27.087 06:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:27.087 06:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:27.087 06:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:27.087 06:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:27.087 06:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:27.087 06:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:27.087 06:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:27.087 06:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:27.087 06:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:27.087 06:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:27.087 06:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:27.087 06:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:27.087 06:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:15:27.087 06:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75826 00:15:27.087 06:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:27.087 06:25:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75826 00:15:27.087 06:25:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 75826 ']' 00:15:27.087 06:25:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:27.087 06:25:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:27.087 06:25:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:27.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:27.087 06:25:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:27.393 06:25:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.393 [2024-11-26 06:25:11.287626] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:15:27.393 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:27.393 Zero copy mechanism will not be used. 
00:15:27.393 [2024-11-26 06:25:11.287842] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75826 ] 00:15:27.393 [2024-11-26 06:25:11.464677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:27.651 [2024-11-26 06:25:11.585102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:27.909 [2024-11-26 06:25:11.806411] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:27.909 [2024-11-26 06:25:11.806453] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:28.166 06:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:28.166 06:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:15:28.166 06:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:28.166 06:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:28.166 06:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.166 06:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.166 BaseBdev1_malloc 00:15:28.166 06:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.166 06:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:28.166 06:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.166 06:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.166 [2024-11-26 06:25:12.244617] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:28.166 
[2024-11-26 06:25:12.244702] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:28.166 [2024-11-26 06:25:12.244730] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:28.166 [2024-11-26 06:25:12.244743] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:28.166 [2024-11-26 06:25:12.247034] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:28.166 [2024-11-26 06:25:12.247090] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:28.166 BaseBdev1 00:15:28.166 06:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.166 06:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:28.166 06:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:28.166 06:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.166 06:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.166 BaseBdev2_malloc 00:15:28.166 06:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.166 06:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:28.166 06:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.166 06:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.424 [2024-11-26 06:25:12.303000] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:28.424 [2024-11-26 06:25:12.303203] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:28.424 [2024-11-26 06:25:12.303261] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:15:28.424 [2024-11-26 06:25:12.303290] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:28.424 [2024-11-26 06:25:12.305932] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:28.424 [2024-11-26 06:25:12.305979] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:28.424 BaseBdev2 00:15:28.424 06:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.424 06:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:28.424 06:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.424 06:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.424 spare_malloc 00:15:28.424 06:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.424 06:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:28.424 06:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.424 06:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.424 spare_delay 00:15:28.424 06:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.424 06:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:28.424 06:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.424 06:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.424 [2024-11-26 06:25:12.381990] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:28.424 [2024-11-26 06:25:12.382074] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:15:28.424 [2024-11-26 06:25:12.382095] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:28.424 [2024-11-26 06:25:12.382107] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:28.424 [2024-11-26 06:25:12.384250] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:28.424 [2024-11-26 06:25:12.384293] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:28.424 spare 00:15:28.424 06:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.424 06:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:15:28.424 06:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.424 06:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.424 [2024-11-26 06:25:12.394022] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:28.424 [2024-11-26 06:25:12.395824] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:28.424 [2024-11-26 06:25:12.395997] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:28.424 [2024-11-26 06:25:12.396016] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:28.424 [2024-11-26 06:25:12.396298] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:28.424 [2024-11-26 06:25:12.396484] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:28.424 [2024-11-26 06:25:12.396497] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:28.424 [2024-11-26 06:25:12.396655] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:15:28.424 06:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.424 06:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:28.424 06:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:28.424 06:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:28.424 06:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:28.424 06:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:28.424 06:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:28.424 06:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.424 06:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.424 06:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.424 06:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.424 06:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.424 06:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.424 06:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.424 06:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.424 06:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.424 06:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.424 "name": "raid_bdev1", 00:15:28.424 "uuid": "40950acd-26e0-472d-8b6d-f03d8517f62a", 00:15:28.424 "strip_size_kb": 0, 00:15:28.424 "state": "online", 00:15:28.424 
"raid_level": "raid1", 00:15:28.424 "superblock": false, 00:15:28.424 "num_base_bdevs": 2, 00:15:28.424 "num_base_bdevs_discovered": 2, 00:15:28.424 "num_base_bdevs_operational": 2, 00:15:28.424 "base_bdevs_list": [ 00:15:28.424 { 00:15:28.424 "name": "BaseBdev1", 00:15:28.425 "uuid": "2c0fc816-699b-5507-857d-dbdd2b75006c", 00:15:28.425 "is_configured": true, 00:15:28.425 "data_offset": 0, 00:15:28.425 "data_size": 65536 00:15:28.425 }, 00:15:28.425 { 00:15:28.425 "name": "BaseBdev2", 00:15:28.425 "uuid": "f1b30101-f4e3-56d0-b670-e0d58acb0de8", 00:15:28.425 "is_configured": true, 00:15:28.425 "data_offset": 0, 00:15:28.425 "data_size": 65536 00:15:28.425 } 00:15:28.425 ] 00:15:28.425 }' 00:15:28.425 06:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.425 06:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.991 06:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:28.991 06:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:28.991 06:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.991 06:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.991 [2024-11-26 06:25:12.893550] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:28.992 06:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.992 06:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:15:28.992 06:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.992 06:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:28.992 06:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.992 06:25:12 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.992 06:25:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.992 06:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:28.992 06:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:28.992 06:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:28.992 06:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:28.992 06:25:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:28.992 06:25:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:28.992 06:25:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:28.992 06:25:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:28.992 06:25:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:28.992 06:25:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:28.992 06:25:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:28.992 06:25:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:28.992 06:25:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:28.992 06:25:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:29.249 [2024-11-26 06:25:13.196771] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:29.249 /dev/nbd0 00:15:29.249 06:25:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:29.249 06:25:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- 
# waitfornbd nbd0 00:15:29.249 06:25:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:29.249 06:25:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:29.249 06:25:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:29.249 06:25:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:29.249 06:25:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:29.249 06:25:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:29.249 06:25:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:29.249 06:25:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:29.249 06:25:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:29.249 1+0 records in 00:15:29.249 1+0 records out 00:15:29.249 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000393779 s, 10.4 MB/s 00:15:29.249 06:25:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:29.249 06:25:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:29.249 06:25:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:29.249 06:25:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:29.249 06:25:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:29.249 06:25:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:29.249 06:25:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:29.249 06:25:13 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:15:29.249 06:25:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:15:29.250 06:25:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:15:34.520 65536+0 records in 00:15:34.520 65536+0 records out 00:15:34.520 33554432 bytes (34 MB, 32 MiB) copied, 5.06885 s, 6.6 MB/s 00:15:34.520 06:25:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:34.520 06:25:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:34.520 06:25:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:34.520 06:25:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:34.520 06:25:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:34.520 06:25:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:34.520 06:25:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:34.520 [2024-11-26 06:25:18.589124] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:34.520 06:25:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:34.520 06:25:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:34.520 06:25:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:34.520 06:25:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:34.520 06:25:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:34.520 06:25:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:34.520 06:25:18 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:15:34.520 06:25:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:34.520 06:25:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:34.520 06:25:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.520 06:25:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.520 [2024-11-26 06:25:18.625163] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:34.520 06:25:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.520 06:25:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:34.521 06:25:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:34.521 06:25:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:34.521 06:25:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:34.521 06:25:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:34.521 06:25:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:34.521 06:25:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.521 06:25:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.521 06:25:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.521 06:25:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.521 06:25:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.521 06:25:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.521 06:25:18 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.521 06:25:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.779 06:25:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.779 06:25:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.779 "name": "raid_bdev1", 00:15:34.779 "uuid": "40950acd-26e0-472d-8b6d-f03d8517f62a", 00:15:34.779 "strip_size_kb": 0, 00:15:34.779 "state": "online", 00:15:34.779 "raid_level": "raid1", 00:15:34.779 "superblock": false, 00:15:34.779 "num_base_bdevs": 2, 00:15:34.779 "num_base_bdevs_discovered": 1, 00:15:34.779 "num_base_bdevs_operational": 1, 00:15:34.779 "base_bdevs_list": [ 00:15:34.779 { 00:15:34.779 "name": null, 00:15:34.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.779 "is_configured": false, 00:15:34.779 "data_offset": 0, 00:15:34.779 "data_size": 65536 00:15:34.779 }, 00:15:34.779 { 00:15:34.779 "name": "BaseBdev2", 00:15:34.779 "uuid": "f1b30101-f4e3-56d0-b670-e0d58acb0de8", 00:15:34.779 "is_configured": true, 00:15:34.779 "data_offset": 0, 00:15:34.779 "data_size": 65536 00:15:34.779 } 00:15:34.779 ] 00:15:34.779 }' 00:15:34.779 06:25:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.779 06:25:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.037 06:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:35.037 06:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.037 06:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.037 [2024-11-26 06:25:19.112362] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:35.037 [2024-11-26 06:25:19.130063] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 
00:15:35.037 06:25:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.037 06:25:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:35.037 [2024-11-26 06:25:19.132190] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:36.410 06:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:36.410 06:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:36.410 06:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:36.410 06:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:36.410 06:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:36.410 06:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.410 06:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.410 06:25:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.410 06:25:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.410 06:25:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.410 06:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:36.410 "name": "raid_bdev1", 00:15:36.410 "uuid": "40950acd-26e0-472d-8b6d-f03d8517f62a", 00:15:36.410 "strip_size_kb": 0, 00:15:36.410 "state": "online", 00:15:36.410 "raid_level": "raid1", 00:15:36.410 "superblock": false, 00:15:36.410 "num_base_bdevs": 2, 00:15:36.410 "num_base_bdevs_discovered": 2, 00:15:36.410 "num_base_bdevs_operational": 2, 00:15:36.410 "process": { 00:15:36.410 "type": "rebuild", 00:15:36.410 "target": "spare", 00:15:36.410 "progress": { 00:15:36.410 
"blocks": 20480, 00:15:36.410 "percent": 31 00:15:36.410 } 00:15:36.410 }, 00:15:36.410 "base_bdevs_list": [ 00:15:36.410 { 00:15:36.410 "name": "spare", 00:15:36.410 "uuid": "f8eb1a08-d8e6-50c1-9dd0-ac0a2ce8a255", 00:15:36.410 "is_configured": true, 00:15:36.410 "data_offset": 0, 00:15:36.410 "data_size": 65536 00:15:36.410 }, 00:15:36.410 { 00:15:36.410 "name": "BaseBdev2", 00:15:36.410 "uuid": "f1b30101-f4e3-56d0-b670-e0d58acb0de8", 00:15:36.410 "is_configured": true, 00:15:36.410 "data_offset": 0, 00:15:36.410 "data_size": 65536 00:15:36.410 } 00:15:36.410 ] 00:15:36.410 }' 00:15:36.410 06:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:36.410 06:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:36.410 06:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:36.410 06:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:36.410 06:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:36.410 06:25:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.410 06:25:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.411 [2024-11-26 06:25:20.279238] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:36.411 [2024-11-26 06:25:20.338382] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:36.411 [2024-11-26 06:25:20.338471] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:36.411 [2024-11-26 06:25:20.338488] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:36.411 [2024-11-26 06:25:20.338498] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:36.411 06:25:20 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.411 06:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:36.411 06:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:36.411 06:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:36.411 06:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:36.411 06:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:36.411 06:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:36.411 06:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.411 06:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.411 06:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.411 06:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.411 06:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.411 06:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.411 06:25:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.411 06:25:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.411 06:25:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.411 06:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.411 "name": "raid_bdev1", 00:15:36.411 "uuid": "40950acd-26e0-472d-8b6d-f03d8517f62a", 00:15:36.411 "strip_size_kb": 0, 00:15:36.411 "state": "online", 00:15:36.411 "raid_level": "raid1", 00:15:36.411 
"superblock": false, 00:15:36.411 "num_base_bdevs": 2, 00:15:36.411 "num_base_bdevs_discovered": 1, 00:15:36.411 "num_base_bdevs_operational": 1, 00:15:36.411 "base_bdevs_list": [ 00:15:36.411 { 00:15:36.411 "name": null, 00:15:36.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.411 "is_configured": false, 00:15:36.411 "data_offset": 0, 00:15:36.411 "data_size": 65536 00:15:36.411 }, 00:15:36.411 { 00:15:36.411 "name": "BaseBdev2", 00:15:36.411 "uuid": "f1b30101-f4e3-56d0-b670-e0d58acb0de8", 00:15:36.411 "is_configured": true, 00:15:36.411 "data_offset": 0, 00:15:36.411 "data_size": 65536 00:15:36.411 } 00:15:36.411 ] 00:15:36.411 }' 00:15:36.411 06:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.411 06:25:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.978 06:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:36.978 06:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:36.978 06:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:36.978 06:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:36.978 06:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:36.978 06:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.979 06:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.979 06:25:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.979 06:25:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.979 06:25:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.979 06:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:15:36.979 "name": "raid_bdev1", 00:15:36.979 "uuid": "40950acd-26e0-472d-8b6d-f03d8517f62a", 00:15:36.979 "strip_size_kb": 0, 00:15:36.979 "state": "online", 00:15:36.979 "raid_level": "raid1", 00:15:36.979 "superblock": false, 00:15:36.979 "num_base_bdevs": 2, 00:15:36.979 "num_base_bdevs_discovered": 1, 00:15:36.979 "num_base_bdevs_operational": 1, 00:15:36.979 "base_bdevs_list": [ 00:15:36.979 { 00:15:36.979 "name": null, 00:15:36.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.979 "is_configured": false, 00:15:36.979 "data_offset": 0, 00:15:36.979 "data_size": 65536 00:15:36.979 }, 00:15:36.979 { 00:15:36.979 "name": "BaseBdev2", 00:15:36.979 "uuid": "f1b30101-f4e3-56d0-b670-e0d58acb0de8", 00:15:36.979 "is_configured": true, 00:15:36.979 "data_offset": 0, 00:15:36.979 "data_size": 65536 00:15:36.979 } 00:15:36.979 ] 00:15:36.979 }' 00:15:36.979 06:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:36.979 06:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:36.979 06:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:36.979 06:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:36.979 06:25:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:36.979 06:25:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.979 06:25:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.979 [2024-11-26 06:25:20.999309] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:36.979 [2024-11-26 06:25:21.015103] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:15:36.979 06:25:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.979 
06:25:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:36.979 [2024-11-26 06:25:21.017339] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:37.915 06:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:37.915 06:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:37.915 06:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:37.915 06:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:37.915 06:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:37.915 06:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.915 06:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.915 06:25:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.915 06:25:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.174 06:25:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.174 06:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:38.174 "name": "raid_bdev1", 00:15:38.174 "uuid": "40950acd-26e0-472d-8b6d-f03d8517f62a", 00:15:38.174 "strip_size_kb": 0, 00:15:38.174 "state": "online", 00:15:38.174 "raid_level": "raid1", 00:15:38.174 "superblock": false, 00:15:38.174 "num_base_bdevs": 2, 00:15:38.174 "num_base_bdevs_discovered": 2, 00:15:38.174 "num_base_bdevs_operational": 2, 00:15:38.174 "process": { 00:15:38.174 "type": "rebuild", 00:15:38.174 "target": "spare", 00:15:38.174 "progress": { 00:15:38.174 "blocks": 20480, 00:15:38.174 "percent": 31 00:15:38.174 } 00:15:38.174 }, 00:15:38.174 "base_bdevs_list": [ 
00:15:38.174 { 00:15:38.174 "name": "spare", 00:15:38.174 "uuid": "f8eb1a08-d8e6-50c1-9dd0-ac0a2ce8a255", 00:15:38.174 "is_configured": true, 00:15:38.174 "data_offset": 0, 00:15:38.174 "data_size": 65536 00:15:38.174 }, 00:15:38.174 { 00:15:38.174 "name": "BaseBdev2", 00:15:38.174 "uuid": "f1b30101-f4e3-56d0-b670-e0d58acb0de8", 00:15:38.174 "is_configured": true, 00:15:38.174 "data_offset": 0, 00:15:38.174 "data_size": 65536 00:15:38.174 } 00:15:38.174 ] 00:15:38.174 }' 00:15:38.174 06:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:38.174 06:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:38.174 06:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:38.174 06:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:38.174 06:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:38.174 06:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:15:38.174 06:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:38.174 06:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:15:38.174 06:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=395 00:15:38.174 06:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:38.174 06:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:38.174 06:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:38.174 06:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:38.174 06:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:38.174 
06:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:38.174 06:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.174 06:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.174 06:25:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.174 06:25:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.174 06:25:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.174 06:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:38.174 "name": "raid_bdev1", 00:15:38.174 "uuid": "40950acd-26e0-472d-8b6d-f03d8517f62a", 00:15:38.174 "strip_size_kb": 0, 00:15:38.174 "state": "online", 00:15:38.174 "raid_level": "raid1", 00:15:38.174 "superblock": false, 00:15:38.174 "num_base_bdevs": 2, 00:15:38.174 "num_base_bdevs_discovered": 2, 00:15:38.174 "num_base_bdevs_operational": 2, 00:15:38.174 "process": { 00:15:38.174 "type": "rebuild", 00:15:38.174 "target": "spare", 00:15:38.174 "progress": { 00:15:38.174 "blocks": 22528, 00:15:38.174 "percent": 34 00:15:38.174 } 00:15:38.174 }, 00:15:38.174 "base_bdevs_list": [ 00:15:38.174 { 00:15:38.174 "name": "spare", 00:15:38.174 "uuid": "f8eb1a08-d8e6-50c1-9dd0-ac0a2ce8a255", 00:15:38.174 "is_configured": true, 00:15:38.174 "data_offset": 0, 00:15:38.174 "data_size": 65536 00:15:38.174 }, 00:15:38.174 { 00:15:38.174 "name": "BaseBdev2", 00:15:38.174 "uuid": "f1b30101-f4e3-56d0-b670-e0d58acb0de8", 00:15:38.174 "is_configured": true, 00:15:38.174 "data_offset": 0, 00:15:38.174 "data_size": 65536 00:15:38.174 } 00:15:38.174 ] 00:15:38.174 }' 00:15:38.174 06:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:38.174 06:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:15:38.174 06:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:38.174 06:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:38.174 06:25:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:39.549 06:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:39.549 06:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:39.549 06:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:39.549 06:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:39.549 06:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:39.549 06:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:39.549 06:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.549 06:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.549 06:25:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.549 06:25:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.549 06:25:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.549 06:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:39.549 "name": "raid_bdev1", 00:15:39.549 "uuid": "40950acd-26e0-472d-8b6d-f03d8517f62a", 00:15:39.549 "strip_size_kb": 0, 00:15:39.549 "state": "online", 00:15:39.549 "raid_level": "raid1", 00:15:39.549 "superblock": false, 00:15:39.549 "num_base_bdevs": 2, 00:15:39.549 "num_base_bdevs_discovered": 2, 00:15:39.549 "num_base_bdevs_operational": 2, 00:15:39.549 "process": { 
00:15:39.549 "type": "rebuild", 00:15:39.549 "target": "spare", 00:15:39.549 "progress": { 00:15:39.549 "blocks": 45056, 00:15:39.549 "percent": 68 00:15:39.549 } 00:15:39.549 }, 00:15:39.549 "base_bdevs_list": [ 00:15:39.549 { 00:15:39.549 "name": "spare", 00:15:39.549 "uuid": "f8eb1a08-d8e6-50c1-9dd0-ac0a2ce8a255", 00:15:39.549 "is_configured": true, 00:15:39.549 "data_offset": 0, 00:15:39.549 "data_size": 65536 00:15:39.549 }, 00:15:39.549 { 00:15:39.549 "name": "BaseBdev2", 00:15:39.549 "uuid": "f1b30101-f4e3-56d0-b670-e0d58acb0de8", 00:15:39.549 "is_configured": true, 00:15:39.549 "data_offset": 0, 00:15:39.549 "data_size": 65536 00:15:39.549 } 00:15:39.549 ] 00:15:39.549 }' 00:15:39.549 06:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:39.549 06:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:39.549 06:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:39.549 06:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:39.549 06:25:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:40.182 [2024-11-26 06:25:24.233458] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:40.182 [2024-11-26 06:25:24.233625] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:40.182 [2024-11-26 06:25:24.233692] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:40.441 06:25:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:40.441 06:25:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:40.441 06:25:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:40.441 06:25:24 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:40.441 06:25:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:40.442 06:25:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:40.442 06:25:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.442 06:25:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.442 06:25:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.442 06:25:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.442 06:25:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.442 06:25:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:40.442 "name": "raid_bdev1", 00:15:40.442 "uuid": "40950acd-26e0-472d-8b6d-f03d8517f62a", 00:15:40.442 "strip_size_kb": 0, 00:15:40.442 "state": "online", 00:15:40.442 "raid_level": "raid1", 00:15:40.442 "superblock": false, 00:15:40.442 "num_base_bdevs": 2, 00:15:40.442 "num_base_bdevs_discovered": 2, 00:15:40.442 "num_base_bdevs_operational": 2, 00:15:40.442 "base_bdevs_list": [ 00:15:40.442 { 00:15:40.442 "name": "spare", 00:15:40.442 "uuid": "f8eb1a08-d8e6-50c1-9dd0-ac0a2ce8a255", 00:15:40.442 "is_configured": true, 00:15:40.442 "data_offset": 0, 00:15:40.442 "data_size": 65536 00:15:40.442 }, 00:15:40.442 { 00:15:40.442 "name": "BaseBdev2", 00:15:40.442 "uuid": "f1b30101-f4e3-56d0-b670-e0d58acb0de8", 00:15:40.442 "is_configured": true, 00:15:40.442 "data_offset": 0, 00:15:40.442 "data_size": 65536 00:15:40.442 } 00:15:40.442 ] 00:15:40.442 }' 00:15:40.442 06:25:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:40.442 06:25:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:40.442 06:25:24 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:40.701 06:25:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:40.701 06:25:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:15:40.701 06:25:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:40.701 06:25:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:40.701 06:25:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:40.701 06:25:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:40.701 06:25:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:40.701 06:25:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.701 06:25:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.701 06:25:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.701 06:25:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.701 06:25:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.701 06:25:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:40.701 "name": "raid_bdev1", 00:15:40.701 "uuid": "40950acd-26e0-472d-8b6d-f03d8517f62a", 00:15:40.701 "strip_size_kb": 0, 00:15:40.701 "state": "online", 00:15:40.701 "raid_level": "raid1", 00:15:40.701 "superblock": false, 00:15:40.701 "num_base_bdevs": 2, 00:15:40.701 "num_base_bdevs_discovered": 2, 00:15:40.701 "num_base_bdevs_operational": 2, 00:15:40.701 "base_bdevs_list": [ 00:15:40.701 { 00:15:40.701 "name": "spare", 00:15:40.701 "uuid": "f8eb1a08-d8e6-50c1-9dd0-ac0a2ce8a255", 00:15:40.701 "is_configured": true, 
00:15:40.701 "data_offset": 0, 00:15:40.701 "data_size": 65536 00:15:40.701 }, 00:15:40.701 { 00:15:40.701 "name": "BaseBdev2", 00:15:40.701 "uuid": "f1b30101-f4e3-56d0-b670-e0d58acb0de8", 00:15:40.701 "is_configured": true, 00:15:40.701 "data_offset": 0, 00:15:40.701 "data_size": 65536 00:15:40.701 } 00:15:40.701 ] 00:15:40.701 }' 00:15:40.701 06:25:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:40.701 06:25:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:40.701 06:25:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:40.701 06:25:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:40.701 06:25:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:40.701 06:25:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:40.701 06:25:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:40.701 06:25:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:40.701 06:25:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:40.701 06:25:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:40.701 06:25:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.701 06:25:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.701 06:25:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.701 06:25:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.701 06:25:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.701 06:25:24 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.701 06:25:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.701 06:25:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.701 06:25:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.701 06:25:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.701 "name": "raid_bdev1", 00:15:40.701 "uuid": "40950acd-26e0-472d-8b6d-f03d8517f62a", 00:15:40.701 "strip_size_kb": 0, 00:15:40.701 "state": "online", 00:15:40.701 "raid_level": "raid1", 00:15:40.701 "superblock": false, 00:15:40.701 "num_base_bdevs": 2, 00:15:40.702 "num_base_bdevs_discovered": 2, 00:15:40.702 "num_base_bdevs_operational": 2, 00:15:40.702 "base_bdevs_list": [ 00:15:40.702 { 00:15:40.702 "name": "spare", 00:15:40.702 "uuid": "f8eb1a08-d8e6-50c1-9dd0-ac0a2ce8a255", 00:15:40.702 "is_configured": true, 00:15:40.702 "data_offset": 0, 00:15:40.702 "data_size": 65536 00:15:40.702 }, 00:15:40.702 { 00:15:40.702 "name": "BaseBdev2", 00:15:40.702 "uuid": "f1b30101-f4e3-56d0-b670-e0d58acb0de8", 00:15:40.702 "is_configured": true, 00:15:40.702 "data_offset": 0, 00:15:40.702 "data_size": 65536 00:15:40.702 } 00:15:40.702 ] 00:15:40.702 }' 00:15:40.702 06:25:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.702 06:25:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.269 06:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:41.269 06:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.269 06:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.269 [2024-11-26 06:25:25.194350] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:41.269 [2024-11-26 06:25:25.194407] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:41.269 [2024-11-26 06:25:25.194513] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:41.269 [2024-11-26 06:25:25.194590] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:41.269 [2024-11-26 06:25:25.194601] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:41.269 06:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.269 06:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.269 06:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:15:41.269 06:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.269 06:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.269 06:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.269 06:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:41.269 06:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:41.269 06:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:41.269 06:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:41.269 06:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:41.269 06:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:41.269 06:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:41.269 06:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:15:41.269 06:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:41.269 06:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:41.269 06:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:41.269 06:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:41.269 06:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:41.528 /dev/nbd0 00:15:41.528 06:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:41.528 06:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:41.528 06:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:41.528 06:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:41.528 06:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:41.528 06:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:41.528 06:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:41.528 06:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:41.528 06:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:41.528 06:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:41.528 06:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:41.528 1+0 records in 00:15:41.528 1+0 records out 00:15:41.528 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000596712 s, 6.9 MB/s 00:15:41.528 06:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:41.528 06:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:41.528 06:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:41.528 06:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:41.528 06:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:41.528 06:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:41.528 06:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:41.528 06:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:41.786 /dev/nbd1 00:15:41.786 06:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:41.786 06:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:41.786 06:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:41.786 06:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:41.786 06:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:41.786 06:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:41.786 06:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:41.786 06:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:41.786 06:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:41.786 06:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:41.786 06:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd 
if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:41.786 1+0 records in 00:15:41.786 1+0 records out 00:15:41.786 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000411557 s, 10.0 MB/s 00:15:41.786 06:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:41.786 06:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:41.786 06:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:41.786 06:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:41.786 06:25:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:41.786 06:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:41.786 06:25:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:41.786 06:25:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:42.044 06:25:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:42.045 06:25:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:42.045 06:25:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:42.045 06:25:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:42.045 06:25:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:42.045 06:25:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:42.045 06:25:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:42.304 06:25:26 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:42.304 06:25:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:42.304 06:25:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:42.304 06:25:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:42.304 06:25:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:42.304 06:25:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:42.304 06:25:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:42.304 06:25:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:42.304 06:25:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:42.304 06:25:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:42.565 06:25:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:42.565 06:25:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:42.565 06:25:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:42.565 06:25:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:42.565 06:25:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:42.565 06:25:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:42.565 06:25:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:42.565 06:25:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:42.565 06:25:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:42.565 06:25:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75826 00:15:42.565 06:25:26 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 75826 ']' 00:15:42.565 06:25:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 75826 00:15:42.565 06:25:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:15:42.565 06:25:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:42.565 06:25:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75826 00:15:42.565 killing process with pid 75826 00:15:42.565 Received shutdown signal, test time was about 60.000000 seconds 00:15:42.565 00:15:42.565 Latency(us) 00:15:42.565 [2024-11-26T06:25:26.702Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:42.565 [2024-11-26T06:25:26.702Z] =================================================================================================================== 00:15:42.565 [2024-11-26T06:25:26.702Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:42.565 06:25:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:42.565 06:25:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:42.565 06:25:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75826' 00:15:42.565 06:25:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 75826 00:15:42.565 06:25:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 75826 00:15:42.565 [2024-11-26 06:25:26.691951] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:43.135 [2024-11-26 06:25:27.022013] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:44.075 06:25:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:15:44.075 00:15:44.075 real 0m17.009s 00:15:44.075 user 0m18.683s 00:15:44.075 sys 0m3.781s 00:15:44.075 
************************************ 00:15:44.075 END TEST raid_rebuild_test 00:15:44.075 ************************************ 00:15:44.075 06:25:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:44.075 06:25:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.335 06:25:28 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:15:44.335 06:25:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:44.335 06:25:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:44.335 06:25:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:44.335 ************************************ 00:15:44.335 START TEST raid_rebuild_test_sb 00:15:44.335 ************************************ 00:15:44.335 06:25:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:15:44.335 06:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:44.335 06:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:15:44.335 06:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:44.335 06:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:44.335 06:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:44.335 06:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:44.335 06:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:44.335 06:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:44.335 06:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:44.335 06:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:15:44.335 06:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:44.335 06:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:44.335 06:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:44.335 06:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:44.335 06:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:44.335 06:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:44.335 06:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:44.335 06:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:44.335 06:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:44.335 06:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:44.335 06:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:44.335 06:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:44.335 06:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:44.335 06:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:44.335 06:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=76264 00:15:44.335 06:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 76264 00:15:44.335 06:25:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:44.335 06:25:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 76264 ']' 00:15:44.335 Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:44.335 06:25:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:44.335 06:25:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:44.335 06:25:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:44.335 06:25:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:44.335 06:25:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.335 [2024-11-26 06:25:28.379922] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:15:44.335 [2024-11-26 06:25:28.380228] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76264 ] 00:15:44.335 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:44.335 Zero copy mechanism will not be used. 
00:15:44.595 [2024-11-26 06:25:28.548512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:44.595 [2024-11-26 06:25:28.674715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.855 [2024-11-26 06:25:28.894224] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:44.855 [2024-11-26 06:25:28.894292] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:45.114 06:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:45.114 06:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:45.114 06:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:45.114 06:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:45.114 06:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.114 06:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.373 BaseBdev1_malloc 00:15:45.373 06:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.373 06:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:45.373 06:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.373 06:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.373 [2024-11-26 06:25:29.296032] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:45.374 [2024-11-26 06:25:29.296124] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:45.374 [2024-11-26 06:25:29.296149] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:45.374 [2024-11-26 
06:25:29.296162] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:45.374 [2024-11-26 06:25:29.298464] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:45.374 [2024-11-26 06:25:29.298508] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:45.374 BaseBdev1 00:15:45.374 06:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.374 06:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:45.374 06:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:45.374 06:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.374 06:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.374 BaseBdev2_malloc 00:15:45.374 06:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.374 06:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:45.374 06:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.374 06:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.374 [2024-11-26 06:25:29.353682] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:45.374 [2024-11-26 06:25:29.353758] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:45.374 [2024-11-26 06:25:29.353777] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:45.374 [2024-11-26 06:25:29.353789] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:45.374 [2024-11-26 06:25:29.355905] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:15:45.374 [2024-11-26 06:25:29.355946] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:45.374 BaseBdev2 00:15:45.374 06:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.374 06:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:45.374 06:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.374 06:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.374 spare_malloc 00:15:45.374 06:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.374 06:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:45.374 06:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.374 06:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.374 spare_delay 00:15:45.374 06:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.374 06:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:45.374 06:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.374 06:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.374 [2024-11-26 06:25:29.434269] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:45.374 [2024-11-26 06:25:29.434336] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:45.374 [2024-11-26 06:25:29.434354] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:45.374 [2024-11-26 06:25:29.434364] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:45.374 [2024-11-26 06:25:29.436438] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:45.374 [2024-11-26 06:25:29.436480] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:45.374 spare 00:15:45.374 06:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.374 06:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:15:45.374 06:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.374 06:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.374 [2024-11-26 06:25:29.446311] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:45.374 [2024-11-26 06:25:29.448153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:45.374 [2024-11-26 06:25:29.448310] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:45.374 [2024-11-26 06:25:29.448327] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:45.374 [2024-11-26 06:25:29.448580] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:45.374 [2024-11-26 06:25:29.448746] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:45.374 [2024-11-26 06:25:29.448755] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:45.374 [2024-11-26 06:25:29.448895] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:45.374 06:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.374 06:25:29 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:45.374 06:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:45.374 06:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:45.374 06:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:45.374 06:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:45.374 06:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:45.374 06:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.374 06:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.374 06:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.374 06:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.374 06:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.374 06:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.374 06:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.374 06:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.374 06:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.634 06:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.634 "name": "raid_bdev1", 00:15:45.634 "uuid": "74f8c10f-683a-41c2-8879-98019f1e4f92", 00:15:45.634 "strip_size_kb": 0, 00:15:45.634 "state": "online", 00:15:45.634 "raid_level": "raid1", 00:15:45.634 "superblock": true, 00:15:45.634 "num_base_bdevs": 2, 00:15:45.634 
"num_base_bdevs_discovered": 2, 00:15:45.634 "num_base_bdevs_operational": 2, 00:15:45.634 "base_bdevs_list": [ 00:15:45.634 { 00:15:45.634 "name": "BaseBdev1", 00:15:45.634 "uuid": "b30e7ff5-8608-575b-b9c6-6447a72b5a1a", 00:15:45.634 "is_configured": true, 00:15:45.634 "data_offset": 2048, 00:15:45.634 "data_size": 63488 00:15:45.634 }, 00:15:45.634 { 00:15:45.634 "name": "BaseBdev2", 00:15:45.634 "uuid": "7b520862-4006-510a-a41b-f6286ad125b3", 00:15:45.634 "is_configured": true, 00:15:45.634 "data_offset": 2048, 00:15:45.634 "data_size": 63488 00:15:45.634 } 00:15:45.634 ] 00:15:45.634 }' 00:15:45.634 06:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.634 06:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.897 06:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:45.897 06:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:45.897 06:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.897 06:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.897 [2024-11-26 06:25:29.885984] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:45.897 06:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.897 06:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:15:45.897 06:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.897 06:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.897 06:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.897 06:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 
00:15:45.897 06:25:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.897 06:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:45.897 06:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:45.897 06:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:45.897 06:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:45.897 06:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:45.897 06:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:45.897 06:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:45.897 06:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:45.897 06:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:45.897 06:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:45.897 06:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:45.897 06:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:45.897 06:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:45.897 06:25:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:46.158 [2024-11-26 06:25:30.201238] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:46.158 /dev/nbd0 00:15:46.158 06:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:46.158 06:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
00:15:46.158 06:25:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:46.158 06:25:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:46.158 06:25:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:46.158 06:25:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:46.158 06:25:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:46.158 06:25:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:46.158 06:25:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:46.158 06:25:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:46.158 06:25:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:46.158 1+0 records in 00:15:46.158 1+0 records out 00:15:46.158 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000693101 s, 5.9 MB/s 00:15:46.158 06:25:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:46.158 06:25:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:46.158 06:25:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:46.158 06:25:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:46.158 06:25:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:46.158 06:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:46.158 06:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:46.158 06:25:30 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:15:46.158 06:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:15:46.158 06:25:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:15:51.434 63488+0 records in 00:15:51.434 63488+0 records out 00:15:51.434 32505856 bytes (33 MB, 31 MiB) copied, 4.84715 s, 6.7 MB/s 00:15:51.434 06:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:51.434 06:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:51.434 06:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:51.434 06:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:51.434 06:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:51.434 06:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:51.434 06:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:51.434 06:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:51.434 [2024-11-26 06:25:35.357929] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:51.434 06:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:51.434 06:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:51.434 06:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:51.434 06:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:51.434 06:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd0 /proc/partitions 00:15:51.434 06:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:51.434 06:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:51.434 06:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:51.434 06:25:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.434 06:25:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.434 [2024-11-26 06:25:35.374577] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:51.434 06:25:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.434 06:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:51.434 06:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:51.434 06:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:51.434 06:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:51.434 06:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:51.434 06:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:51.434 06:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.434 06:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.434 06:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.434 06:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.434 06:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.434 06:25:35 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.434 06:25:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.434 06:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.434 06:25:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.434 06:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.434 "name": "raid_bdev1", 00:15:51.434 "uuid": "74f8c10f-683a-41c2-8879-98019f1e4f92", 00:15:51.434 "strip_size_kb": 0, 00:15:51.434 "state": "online", 00:15:51.434 "raid_level": "raid1", 00:15:51.434 "superblock": true, 00:15:51.434 "num_base_bdevs": 2, 00:15:51.434 "num_base_bdevs_discovered": 1, 00:15:51.434 "num_base_bdevs_operational": 1, 00:15:51.434 "base_bdevs_list": [ 00:15:51.434 { 00:15:51.434 "name": null, 00:15:51.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.434 "is_configured": false, 00:15:51.434 "data_offset": 0, 00:15:51.434 "data_size": 63488 00:15:51.434 }, 00:15:51.434 { 00:15:51.434 "name": "BaseBdev2", 00:15:51.434 "uuid": "7b520862-4006-510a-a41b-f6286ad125b3", 00:15:51.434 "is_configured": true, 00:15:51.434 "data_offset": 2048, 00:15:51.434 "data_size": 63488 00:15:51.434 } 00:15:51.434 ] 00:15:51.434 }' 00:15:51.434 06:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.434 06:25:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.694 06:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:51.694 06:25:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.694 06:25:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.694 [2024-11-26 06:25:35.789959] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev spare is claimed 00:15:51.694 [2024-11-26 06:25:35.810733] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:15:51.694 06:25:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.694 06:25:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:51.694 [2024-11-26 06:25:35.813122] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:53.096 06:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:53.096 06:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:53.096 06:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:53.096 06:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:53.096 06:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:53.096 06:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.096 06:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:53.096 06:25:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.096 06:25:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.096 06:25:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.096 06:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:53.096 "name": "raid_bdev1", 00:15:53.096 "uuid": "74f8c10f-683a-41c2-8879-98019f1e4f92", 00:15:53.096 "strip_size_kb": 0, 00:15:53.096 "state": "online", 00:15:53.096 "raid_level": "raid1", 00:15:53.096 "superblock": true, 00:15:53.096 "num_base_bdevs": 2, 00:15:53.096 
"num_base_bdevs_discovered": 2, 00:15:53.096 "num_base_bdevs_operational": 2, 00:15:53.096 "process": { 00:15:53.096 "type": "rebuild", 00:15:53.096 "target": "spare", 00:15:53.096 "progress": { 00:15:53.096 "blocks": 20480, 00:15:53.096 "percent": 32 00:15:53.096 } 00:15:53.096 }, 00:15:53.096 "base_bdevs_list": [ 00:15:53.096 { 00:15:53.096 "name": "spare", 00:15:53.096 "uuid": "03efc4de-5a2d-51ef-aebe-ee4c6c5df1b9", 00:15:53.096 "is_configured": true, 00:15:53.096 "data_offset": 2048, 00:15:53.096 "data_size": 63488 00:15:53.096 }, 00:15:53.096 { 00:15:53.096 "name": "BaseBdev2", 00:15:53.096 "uuid": "7b520862-4006-510a-a41b-f6286ad125b3", 00:15:53.096 "is_configured": true, 00:15:53.096 "data_offset": 2048, 00:15:53.096 "data_size": 63488 00:15:53.096 } 00:15:53.096 ] 00:15:53.096 }' 00:15:53.096 06:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:53.096 06:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:53.096 06:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:53.096 06:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:53.096 06:25:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:53.096 06:25:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.096 06:25:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.096 [2024-11-26 06:25:36.967998] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:53.096 [2024-11-26 06:25:37.019765] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:53.096 [2024-11-26 06:25:37.019890] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:53.096 [2024-11-26 06:25:37.019910] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:53.096 [2024-11-26 06:25:37.019922] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:53.096 06:25:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.096 06:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:53.096 06:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:53.096 06:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:53.096 06:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:53.096 06:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:53.096 06:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:53.096 06:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:53.096 06:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:53.096 06:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:53.096 06:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:53.096 06:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.096 06:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:53.096 06:25:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.096 06:25:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.096 06:25:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.096 06:25:37 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:53.096 "name": "raid_bdev1", 00:15:53.096 "uuid": "74f8c10f-683a-41c2-8879-98019f1e4f92", 00:15:53.096 "strip_size_kb": 0, 00:15:53.096 "state": "online", 00:15:53.096 "raid_level": "raid1", 00:15:53.096 "superblock": true, 00:15:53.096 "num_base_bdevs": 2, 00:15:53.096 "num_base_bdevs_discovered": 1, 00:15:53.096 "num_base_bdevs_operational": 1, 00:15:53.096 "base_bdevs_list": [ 00:15:53.096 { 00:15:53.096 "name": null, 00:15:53.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.096 "is_configured": false, 00:15:53.096 "data_offset": 0, 00:15:53.096 "data_size": 63488 00:15:53.096 }, 00:15:53.096 { 00:15:53.096 "name": "BaseBdev2", 00:15:53.096 "uuid": "7b520862-4006-510a-a41b-f6286ad125b3", 00:15:53.096 "is_configured": true, 00:15:53.096 "data_offset": 2048, 00:15:53.096 "data_size": 63488 00:15:53.096 } 00:15:53.096 ] 00:15:53.096 }' 00:15:53.096 06:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:53.096 06:25:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.664 06:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:53.664 06:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:53.664 06:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:53.664 06:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:53.664 06:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:53.664 06:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.664 06:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:53.664 06:25:37 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.664 06:25:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.664 06:25:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.664 06:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:53.664 "name": "raid_bdev1", 00:15:53.664 "uuid": "74f8c10f-683a-41c2-8879-98019f1e4f92", 00:15:53.664 "strip_size_kb": 0, 00:15:53.664 "state": "online", 00:15:53.664 "raid_level": "raid1", 00:15:53.664 "superblock": true, 00:15:53.664 "num_base_bdevs": 2, 00:15:53.664 "num_base_bdevs_discovered": 1, 00:15:53.664 "num_base_bdevs_operational": 1, 00:15:53.664 "base_bdevs_list": [ 00:15:53.664 { 00:15:53.664 "name": null, 00:15:53.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.664 "is_configured": false, 00:15:53.664 "data_offset": 0, 00:15:53.664 "data_size": 63488 00:15:53.664 }, 00:15:53.664 { 00:15:53.664 "name": "BaseBdev2", 00:15:53.664 "uuid": "7b520862-4006-510a-a41b-f6286ad125b3", 00:15:53.664 "is_configured": true, 00:15:53.664 "data_offset": 2048, 00:15:53.664 "data_size": 63488 00:15:53.664 } 00:15:53.664 ] 00:15:53.664 }' 00:15:53.664 06:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:53.664 06:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:53.664 06:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:53.664 06:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:53.664 06:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:53.664 06:25:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.664 06:25:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:53.664 [2024-11-26 06:25:37.651940] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:53.664 [2024-11-26 06:25:37.671638] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:15:53.664 06:25:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.664 06:25:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:53.664 [2024-11-26 06:25:37.673941] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:54.599 06:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:54.599 06:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:54.599 06:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:54.599 06:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:54.599 06:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:54.599 06:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.599 06:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.599 06:25:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.599 06:25:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.599 06:25:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.599 06:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:54.599 "name": "raid_bdev1", 00:15:54.599 "uuid": "74f8c10f-683a-41c2-8879-98019f1e4f92", 00:15:54.599 "strip_size_kb": 0, 00:15:54.599 "state": "online", 00:15:54.599 "raid_level": "raid1", 
00:15:54.599 "superblock": true, 00:15:54.599 "num_base_bdevs": 2, 00:15:54.599 "num_base_bdevs_discovered": 2, 00:15:54.599 "num_base_bdevs_operational": 2, 00:15:54.599 "process": { 00:15:54.599 "type": "rebuild", 00:15:54.599 "target": "spare", 00:15:54.599 "progress": { 00:15:54.599 "blocks": 20480, 00:15:54.599 "percent": 32 00:15:54.599 } 00:15:54.599 }, 00:15:54.599 "base_bdevs_list": [ 00:15:54.599 { 00:15:54.599 "name": "spare", 00:15:54.599 "uuid": "03efc4de-5a2d-51ef-aebe-ee4c6c5df1b9", 00:15:54.599 "is_configured": true, 00:15:54.599 "data_offset": 2048, 00:15:54.599 "data_size": 63488 00:15:54.599 }, 00:15:54.599 { 00:15:54.599 "name": "BaseBdev2", 00:15:54.599 "uuid": "7b520862-4006-510a-a41b-f6286ad125b3", 00:15:54.599 "is_configured": true, 00:15:54.599 "data_offset": 2048, 00:15:54.599 "data_size": 63488 00:15:54.599 } 00:15:54.599 ] 00:15:54.599 }' 00:15:54.599 06:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:54.857 06:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:54.857 06:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:54.857 06:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:54.857 06:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:54.857 06:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:54.857 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:54.857 06:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:15:54.857 06:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:54.857 06:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:15:54.857 06:25:38 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=411 00:15:54.857 06:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:54.857 06:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:54.857 06:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:54.857 06:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:54.857 06:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:54.857 06:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:54.857 06:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.857 06:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.857 06:25:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.857 06:25:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.857 06:25:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.857 06:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:54.857 "name": "raid_bdev1", 00:15:54.857 "uuid": "74f8c10f-683a-41c2-8879-98019f1e4f92", 00:15:54.857 "strip_size_kb": 0, 00:15:54.857 "state": "online", 00:15:54.857 "raid_level": "raid1", 00:15:54.857 "superblock": true, 00:15:54.857 "num_base_bdevs": 2, 00:15:54.857 "num_base_bdevs_discovered": 2, 00:15:54.857 "num_base_bdevs_operational": 2, 00:15:54.857 "process": { 00:15:54.857 "type": "rebuild", 00:15:54.857 "target": "spare", 00:15:54.857 "progress": { 00:15:54.857 "blocks": 22528, 00:15:54.857 "percent": 35 00:15:54.857 } 00:15:54.857 }, 00:15:54.857 "base_bdevs_list": [ 
00:15:54.857 { 00:15:54.857 "name": "spare", 00:15:54.857 "uuid": "03efc4de-5a2d-51ef-aebe-ee4c6c5df1b9", 00:15:54.857 "is_configured": true, 00:15:54.857 "data_offset": 2048, 00:15:54.857 "data_size": 63488 00:15:54.857 }, 00:15:54.857 { 00:15:54.857 "name": "BaseBdev2", 00:15:54.857 "uuid": "7b520862-4006-510a-a41b-f6286ad125b3", 00:15:54.857 "is_configured": true, 00:15:54.857 "data_offset": 2048, 00:15:54.857 "data_size": 63488 00:15:54.857 } 00:15:54.857 ] 00:15:54.857 }' 00:15:54.857 06:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:54.857 06:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:54.857 06:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:54.857 06:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:54.857 06:25:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:55.846 06:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:55.846 06:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:55.846 06:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:55.846 06:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:55.846 06:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:55.846 06:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:55.846 06:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.846 06:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.846 06:25:39 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.846 06:25:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.846 06:25:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.104 06:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:56.104 "name": "raid_bdev1", 00:15:56.104 "uuid": "74f8c10f-683a-41c2-8879-98019f1e4f92", 00:15:56.104 "strip_size_kb": 0, 00:15:56.104 "state": "online", 00:15:56.104 "raid_level": "raid1", 00:15:56.104 "superblock": true, 00:15:56.104 "num_base_bdevs": 2, 00:15:56.104 "num_base_bdevs_discovered": 2, 00:15:56.104 "num_base_bdevs_operational": 2, 00:15:56.104 "process": { 00:15:56.104 "type": "rebuild", 00:15:56.104 "target": "spare", 00:15:56.104 "progress": { 00:15:56.104 "blocks": 45056, 00:15:56.104 "percent": 70 00:15:56.104 } 00:15:56.104 }, 00:15:56.104 "base_bdevs_list": [ 00:15:56.104 { 00:15:56.104 "name": "spare", 00:15:56.104 "uuid": "03efc4de-5a2d-51ef-aebe-ee4c6c5df1b9", 00:15:56.104 "is_configured": true, 00:15:56.104 "data_offset": 2048, 00:15:56.104 "data_size": 63488 00:15:56.104 }, 00:15:56.104 { 00:15:56.104 "name": "BaseBdev2", 00:15:56.104 "uuid": "7b520862-4006-510a-a41b-f6286ad125b3", 00:15:56.104 "is_configured": true, 00:15:56.104 "data_offset": 2048, 00:15:56.104 "data_size": 63488 00:15:56.104 } 00:15:56.104 ] 00:15:56.104 }' 00:15:56.104 06:25:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:56.104 06:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:56.104 06:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:56.104 06:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:56.104 06:25:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:56.672 [2024-11-26 
06:25:40.790102] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:56.672 [2024-11-26 06:25:40.790293] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:56.672 [2024-11-26 06:25:40.790471] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:57.240 06:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:57.240 06:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:57.240 06:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:57.240 06:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:57.240 06:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:57.240 06:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:57.240 06:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.240 06:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.240 06:25:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.240 06:25:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.240 06:25:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.240 06:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:57.240 "name": "raid_bdev1", 00:15:57.240 "uuid": "74f8c10f-683a-41c2-8879-98019f1e4f92", 00:15:57.240 "strip_size_kb": 0, 00:15:57.240 "state": "online", 00:15:57.240 "raid_level": "raid1", 00:15:57.240 "superblock": true, 00:15:57.240 "num_base_bdevs": 2, 00:15:57.240 "num_base_bdevs_discovered": 2, 00:15:57.240 
"num_base_bdevs_operational": 2, 00:15:57.240 "base_bdevs_list": [ 00:15:57.240 { 00:15:57.240 "name": "spare", 00:15:57.240 "uuid": "03efc4de-5a2d-51ef-aebe-ee4c6c5df1b9", 00:15:57.240 "is_configured": true, 00:15:57.240 "data_offset": 2048, 00:15:57.240 "data_size": 63488 00:15:57.240 }, 00:15:57.240 { 00:15:57.240 "name": "BaseBdev2", 00:15:57.240 "uuid": "7b520862-4006-510a-a41b-f6286ad125b3", 00:15:57.240 "is_configured": true, 00:15:57.240 "data_offset": 2048, 00:15:57.240 "data_size": 63488 00:15:57.240 } 00:15:57.240 ] 00:15:57.240 }' 00:15:57.240 06:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:57.240 06:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:57.240 06:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:57.240 06:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:57.240 06:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:15:57.240 06:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:57.240 06:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:57.240 06:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:57.240 06:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:57.240 06:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:57.240 06:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.240 06:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.240 06:25:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:57.240 06:25:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.240 06:25:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.240 06:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:57.240 "name": "raid_bdev1", 00:15:57.240 "uuid": "74f8c10f-683a-41c2-8879-98019f1e4f92", 00:15:57.240 "strip_size_kb": 0, 00:15:57.240 "state": "online", 00:15:57.240 "raid_level": "raid1", 00:15:57.240 "superblock": true, 00:15:57.240 "num_base_bdevs": 2, 00:15:57.240 "num_base_bdevs_discovered": 2, 00:15:57.240 "num_base_bdevs_operational": 2, 00:15:57.240 "base_bdevs_list": [ 00:15:57.240 { 00:15:57.240 "name": "spare", 00:15:57.240 "uuid": "03efc4de-5a2d-51ef-aebe-ee4c6c5df1b9", 00:15:57.240 "is_configured": true, 00:15:57.240 "data_offset": 2048, 00:15:57.240 "data_size": 63488 00:15:57.240 }, 00:15:57.240 { 00:15:57.240 "name": "BaseBdev2", 00:15:57.240 "uuid": "7b520862-4006-510a-a41b-f6286ad125b3", 00:15:57.240 "is_configured": true, 00:15:57.240 "data_offset": 2048, 00:15:57.240 "data_size": 63488 00:15:57.240 } 00:15:57.240 ] 00:15:57.240 }' 00:15:57.240 06:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:57.240 06:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:57.240 06:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:57.499 06:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:57.499 06:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:57.499 06:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:57.499 06:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:57.499 06:25:41 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:57.499 06:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:57.499 06:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:57.499 06:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.499 06:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.499 06:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.499 06:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.499 06:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.499 06:25:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.499 06:25:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.499 06:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.499 06:25:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.499 06:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.499 "name": "raid_bdev1", 00:15:57.499 "uuid": "74f8c10f-683a-41c2-8879-98019f1e4f92", 00:15:57.499 "strip_size_kb": 0, 00:15:57.499 "state": "online", 00:15:57.499 "raid_level": "raid1", 00:15:57.499 "superblock": true, 00:15:57.499 "num_base_bdevs": 2, 00:15:57.499 "num_base_bdevs_discovered": 2, 00:15:57.500 "num_base_bdevs_operational": 2, 00:15:57.500 "base_bdevs_list": [ 00:15:57.500 { 00:15:57.500 "name": "spare", 00:15:57.500 "uuid": "03efc4de-5a2d-51ef-aebe-ee4c6c5df1b9", 00:15:57.500 "is_configured": true, 00:15:57.500 "data_offset": 2048, 00:15:57.500 "data_size": 63488 00:15:57.500 }, 00:15:57.500 { 
00:15:57.500 "name": "BaseBdev2", 00:15:57.500 "uuid": "7b520862-4006-510a-a41b-f6286ad125b3", 00:15:57.500 "is_configured": true, 00:15:57.500 "data_offset": 2048, 00:15:57.500 "data_size": 63488 00:15:57.500 } 00:15:57.500 ] 00:15:57.500 }' 00:15:57.500 06:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.500 06:25:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.759 06:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:57.759 06:25:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.759 06:25:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.759 [2024-11-26 06:25:41.842260] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:57.759 [2024-11-26 06:25:41.842317] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:57.759 [2024-11-26 06:25:41.842419] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:57.759 [2024-11-26 06:25:41.842501] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:57.759 [2024-11-26 06:25:41.842513] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:57.759 06:25:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.759 06:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.759 06:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:15:57.759 06:25:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.759 06:25:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.759 06:25:41 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.759 06:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:57.759 06:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:57.759 06:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:57.759 06:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:57.759 06:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:57.759 06:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:57.759 06:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:57.759 06:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:57.759 06:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:57.759 06:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:57.759 06:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:57.759 06:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:57.759 06:25:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:58.018 /dev/nbd0 00:15:58.018 06:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:58.278 06:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:58.278 06:25:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:58.278 06:25:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 
00:15:58.278 06:25:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:58.278 06:25:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:58.278 06:25:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:58.278 06:25:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:58.278 06:25:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:58.278 06:25:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:58.278 06:25:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:58.278 1+0 records in 00:15:58.278 1+0 records out 00:15:58.278 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000388722 s, 10.5 MB/s 00:15:58.278 06:25:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:58.278 06:25:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:58.278 06:25:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:58.278 06:25:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:58.278 06:25:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:58.278 06:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:58.278 06:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:58.278 06:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:58.278 /dev/nbd1 00:15:58.537 06:25:42 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:58.537 06:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:58.537 06:25:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:58.537 06:25:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:58.537 06:25:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:58.537 06:25:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:58.537 06:25:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:58.537 06:25:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:58.537 06:25:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:58.537 06:25:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:58.537 06:25:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:58.537 1+0 records in 00:15:58.537 1+0 records out 00:15:58.537 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00026528 s, 15.4 MB/s 00:15:58.537 06:25:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:58.537 06:25:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:58.537 06:25:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:58.537 06:25:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:58.537 06:25:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:58.537 06:25:42 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:58.537 06:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:58.537 06:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:58.537 06:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:58.537 06:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:58.537 06:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:58.537 06:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:58.537 06:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:58.537 06:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:58.537 06:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:58.797 06:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:58.797 06:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:58.797 06:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:58.797 06:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:58.797 06:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:58.797 06:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:58.797 06:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:58.797 06:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:58.797 06:25:42 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:58.797 06:25:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:59.062 06:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:59.062 06:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:59.062 06:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:59.062 06:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:59.062 06:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:59.062 06:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:59.062 06:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:59.062 06:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:59.062 06:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:59.062 06:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:59.062 06:25:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.062 06:25:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.062 06:25:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.062 06:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:59.062 06:25:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.062 06:25:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.062 [2024-11-26 06:25:43.152367] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
spare_delay 00:15:59.062 [2024-11-26 06:25:43.152471] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.062 [2024-11-26 06:25:43.152502] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:59.062 [2024-11-26 06:25:43.152514] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.062 [2024-11-26 06:25:43.155201] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.062 [2024-11-26 06:25:43.155248] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:59.062 [2024-11-26 06:25:43.155362] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:59.062 [2024-11-26 06:25:43.155426] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:59.062 [2024-11-26 06:25:43.155610] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:59.062 spare 00:15:59.062 06:25:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.062 06:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:59.062 06:25:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.062 06:25:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.321 [2024-11-26 06:25:43.255553] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:59.321 [2024-11-26 06:25:43.255629] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:59.321 [2024-11-26 06:25:43.256045] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:15:59.321 [2024-11-26 06:25:43.256300] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:59.321 [2024-11-26 06:25:43.256316] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:59.321 [2024-11-26 06:25:43.256687] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:59.321 06:25:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.321 06:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:59.321 06:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:59.321 06:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:59.321 06:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:59.321 06:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:59.321 06:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:59.321 06:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.321 06:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.321 06:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.321 06:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.321 06:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.321 06:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.321 06:25:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.321 06:25:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.322 06:25:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.322 
06:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.322 "name": "raid_bdev1", 00:15:59.322 "uuid": "74f8c10f-683a-41c2-8879-98019f1e4f92", 00:15:59.322 "strip_size_kb": 0, 00:15:59.322 "state": "online", 00:15:59.322 "raid_level": "raid1", 00:15:59.322 "superblock": true, 00:15:59.322 "num_base_bdevs": 2, 00:15:59.322 "num_base_bdevs_discovered": 2, 00:15:59.322 "num_base_bdevs_operational": 2, 00:15:59.322 "base_bdevs_list": [ 00:15:59.322 { 00:15:59.322 "name": "spare", 00:15:59.322 "uuid": "03efc4de-5a2d-51ef-aebe-ee4c6c5df1b9", 00:15:59.322 "is_configured": true, 00:15:59.322 "data_offset": 2048, 00:15:59.322 "data_size": 63488 00:15:59.322 }, 00:15:59.322 { 00:15:59.322 "name": "BaseBdev2", 00:15:59.322 "uuid": "7b520862-4006-510a-a41b-f6286ad125b3", 00:15:59.322 "is_configured": true, 00:15:59.322 "data_offset": 2048, 00:15:59.322 "data_size": 63488 00:15:59.322 } 00:15:59.322 ] 00:15:59.322 }' 00:15:59.322 06:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.322 06:25:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.581 06:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:59.581 06:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:59.581 06:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:59.581 06:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:59.581 06:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:59.840 06:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.840 06:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.840 06:25:43 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.840 06:25:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.840 06:25:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.840 06:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:59.840 "name": "raid_bdev1", 00:15:59.840 "uuid": "74f8c10f-683a-41c2-8879-98019f1e4f92", 00:15:59.840 "strip_size_kb": 0, 00:15:59.840 "state": "online", 00:15:59.840 "raid_level": "raid1", 00:15:59.840 "superblock": true, 00:15:59.840 "num_base_bdevs": 2, 00:15:59.840 "num_base_bdevs_discovered": 2, 00:15:59.840 "num_base_bdevs_operational": 2, 00:15:59.840 "base_bdevs_list": [ 00:15:59.840 { 00:15:59.840 "name": "spare", 00:15:59.840 "uuid": "03efc4de-5a2d-51ef-aebe-ee4c6c5df1b9", 00:15:59.840 "is_configured": true, 00:15:59.840 "data_offset": 2048, 00:15:59.840 "data_size": 63488 00:15:59.840 }, 00:15:59.840 { 00:15:59.840 "name": "BaseBdev2", 00:15:59.840 "uuid": "7b520862-4006-510a-a41b-f6286ad125b3", 00:15:59.840 "is_configured": true, 00:15:59.840 "data_offset": 2048, 00:15:59.840 "data_size": 63488 00:15:59.840 } 00:15:59.840 ] 00:15:59.840 }' 00:15:59.840 06:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:59.840 06:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:59.840 06:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:59.840 06:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:59.840 06:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:59.840 06:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.840 06:25:43 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.840 06:25:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.840 06:25:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.840 06:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:59.840 06:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:59.840 06:25:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.840 06:25:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.840 [2024-11-26 06:25:43.919496] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:59.840 06:25:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.840 06:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:59.840 06:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:59.840 06:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:59.840 06:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:59.840 06:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:59.840 06:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:59.840 06:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.840 06:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.840 06:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.840 06:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:15:59.840 06:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.840 06:25:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.840 06:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.841 06:25:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.841 06:25:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.099 06:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:00.099 "name": "raid_bdev1", 00:16:00.099 "uuid": "74f8c10f-683a-41c2-8879-98019f1e4f92", 00:16:00.099 "strip_size_kb": 0, 00:16:00.099 "state": "online", 00:16:00.099 "raid_level": "raid1", 00:16:00.099 "superblock": true, 00:16:00.099 "num_base_bdevs": 2, 00:16:00.099 "num_base_bdevs_discovered": 1, 00:16:00.099 "num_base_bdevs_operational": 1, 00:16:00.099 "base_bdevs_list": [ 00:16:00.099 { 00:16:00.099 "name": null, 00:16:00.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.099 "is_configured": false, 00:16:00.099 "data_offset": 0, 00:16:00.099 "data_size": 63488 00:16:00.099 }, 00:16:00.099 { 00:16:00.099 "name": "BaseBdev2", 00:16:00.099 "uuid": "7b520862-4006-510a-a41b-f6286ad125b3", 00:16:00.099 "is_configured": true, 00:16:00.099 "data_offset": 2048, 00:16:00.099 "data_size": 63488 00:16:00.099 } 00:16:00.099 ] 00:16:00.099 }' 00:16:00.099 06:25:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:00.099 06:25:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.358 06:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:00.358 06:25:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.358 06:25:44 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.358 [2024-11-26 06:25:44.414719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:00.358 [2024-11-26 06:25:44.415119] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:00.358 [2024-11-26 06:25:44.415149] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:00.358 [2024-11-26 06:25:44.415196] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:00.358 [2024-11-26 06:25:44.435158] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:16:00.358 06:25:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.358 06:25:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:00.358 [2024-11-26 06:25:44.437447] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:01.737 06:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:01.737 06:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:01.737 06:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:01.737 06:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:01.737 06:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:01.737 06:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.737 06:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.737 06:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:01.737 06:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.737 06:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.737 06:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:01.737 "name": "raid_bdev1", 00:16:01.737 "uuid": "74f8c10f-683a-41c2-8879-98019f1e4f92", 00:16:01.737 "strip_size_kb": 0, 00:16:01.737 "state": "online", 00:16:01.737 "raid_level": "raid1", 00:16:01.737 "superblock": true, 00:16:01.737 "num_base_bdevs": 2, 00:16:01.737 "num_base_bdevs_discovered": 2, 00:16:01.737 "num_base_bdevs_operational": 2, 00:16:01.737 "process": { 00:16:01.737 "type": "rebuild", 00:16:01.737 "target": "spare", 00:16:01.737 "progress": { 00:16:01.737 "blocks": 20480, 00:16:01.737 "percent": 32 00:16:01.737 } 00:16:01.737 }, 00:16:01.737 "base_bdevs_list": [ 00:16:01.737 { 00:16:01.737 "name": "spare", 00:16:01.737 "uuid": "03efc4de-5a2d-51ef-aebe-ee4c6c5df1b9", 00:16:01.737 "is_configured": true, 00:16:01.737 "data_offset": 2048, 00:16:01.737 "data_size": 63488 00:16:01.737 }, 00:16:01.737 { 00:16:01.737 "name": "BaseBdev2", 00:16:01.737 "uuid": "7b520862-4006-510a-a41b-f6286ad125b3", 00:16:01.737 "is_configured": true, 00:16:01.737 "data_offset": 2048, 00:16:01.737 "data_size": 63488 00:16:01.737 } 00:16:01.737 ] 00:16:01.737 }' 00:16:01.737 06:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:01.737 06:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:01.737 06:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:01.737 06:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:01.737 06:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:01.737 06:25:45 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.737 06:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.737 [2024-11-26 06:25:45.576347] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:01.737 [2024-11-26 06:25:45.643935] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:01.737 [2024-11-26 06:25:45.644172] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:01.737 [2024-11-26 06:25:45.644230] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:01.737 [2024-11-26 06:25:45.644280] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:01.737 06:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.737 06:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:01.737 06:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:01.737 06:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:01.737 06:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:01.737 06:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:01.737 06:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:01.737 06:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:01.737 06:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:01.737 06:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:01.737 06:25:45 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:16:01.737 06:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.737 06:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.737 06:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.737 06:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.737 06:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.737 06:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.737 "name": "raid_bdev1", 00:16:01.737 "uuid": "74f8c10f-683a-41c2-8879-98019f1e4f92", 00:16:01.737 "strip_size_kb": 0, 00:16:01.737 "state": "online", 00:16:01.737 "raid_level": "raid1", 00:16:01.737 "superblock": true, 00:16:01.737 "num_base_bdevs": 2, 00:16:01.737 "num_base_bdevs_discovered": 1, 00:16:01.737 "num_base_bdevs_operational": 1, 00:16:01.737 "base_bdevs_list": [ 00:16:01.737 { 00:16:01.737 "name": null, 00:16:01.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.737 "is_configured": false, 00:16:01.737 "data_offset": 0, 00:16:01.737 "data_size": 63488 00:16:01.737 }, 00:16:01.737 { 00:16:01.737 "name": "BaseBdev2", 00:16:01.737 "uuid": "7b520862-4006-510a-a41b-f6286ad125b3", 00:16:01.737 "is_configured": true, 00:16:01.737 "data_offset": 2048, 00:16:01.737 "data_size": 63488 00:16:01.737 } 00:16:01.737 ] 00:16:01.737 }' 00:16:01.737 06:25:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.737 06:25:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.997 06:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:01.997 06:25:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:01.997 06:25:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.997 [2024-11-26 06:25:46.119886] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:01.997 [2024-11-26 06:25:46.120095] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:01.997 [2024-11-26 06:25:46.120155] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:01.997 [2024-11-26 06:25:46.120206] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:01.997 [2024-11-26 06:25:46.120836] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:01.997 [2024-11-26 06:25:46.120918] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:01.997 [2024-11-26 06:25:46.121095] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:01.997 [2024-11-26 06:25:46.121148] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:01.997 [2024-11-26 06:25:46.121209] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:01.997 [2024-11-26 06:25:46.121304] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:02.290 [2024-11-26 06:25:46.141317] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:16:02.290 spare 00:16:02.290 06:25:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.290 06:25:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:02.290 [2024-11-26 06:25:46.143648] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:03.228 06:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:03.228 06:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:03.228 06:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:03.228 06:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:03.228 06:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:03.228 06:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.228 06:25:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.228 06:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.228 06:25:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.228 06:25:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.228 06:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:03.228 "name": "raid_bdev1", 00:16:03.228 "uuid": "74f8c10f-683a-41c2-8879-98019f1e4f92", 00:16:03.228 "strip_size_kb": 0, 00:16:03.228 "state": "online", 00:16:03.228 
"raid_level": "raid1", 00:16:03.228 "superblock": true, 00:16:03.228 "num_base_bdevs": 2, 00:16:03.228 "num_base_bdevs_discovered": 2, 00:16:03.228 "num_base_bdevs_operational": 2, 00:16:03.228 "process": { 00:16:03.228 "type": "rebuild", 00:16:03.228 "target": "spare", 00:16:03.228 "progress": { 00:16:03.228 "blocks": 20480, 00:16:03.228 "percent": 32 00:16:03.228 } 00:16:03.228 }, 00:16:03.228 "base_bdevs_list": [ 00:16:03.228 { 00:16:03.228 "name": "spare", 00:16:03.228 "uuid": "03efc4de-5a2d-51ef-aebe-ee4c6c5df1b9", 00:16:03.228 "is_configured": true, 00:16:03.228 "data_offset": 2048, 00:16:03.228 "data_size": 63488 00:16:03.228 }, 00:16:03.228 { 00:16:03.228 "name": "BaseBdev2", 00:16:03.228 "uuid": "7b520862-4006-510a-a41b-f6286ad125b3", 00:16:03.228 "is_configured": true, 00:16:03.228 "data_offset": 2048, 00:16:03.228 "data_size": 63488 00:16:03.228 } 00:16:03.228 ] 00:16:03.228 }' 00:16:03.228 06:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:03.228 06:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:03.228 06:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:03.228 06:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:03.228 06:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:03.228 06:25:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.228 06:25:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.228 [2024-11-26 06:25:47.302802] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:03.228 [2024-11-26 06:25:47.349720] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:03.228 [2024-11-26 06:25:47.349809] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:03.228 [2024-11-26 06:25:47.349828] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:03.228 [2024-11-26 06:25:47.349835] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:03.488 06:25:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.488 06:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:03.488 06:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:03.488 06:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:03.488 06:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:03.488 06:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:03.488 06:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:03.488 06:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:03.488 06:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:03.488 06:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:03.488 06:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:03.488 06:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.488 06:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.488 06:25:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.488 06:25:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.488 06:25:47 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.488 06:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:03.488 "name": "raid_bdev1", 00:16:03.488 "uuid": "74f8c10f-683a-41c2-8879-98019f1e4f92", 00:16:03.488 "strip_size_kb": 0, 00:16:03.488 "state": "online", 00:16:03.488 "raid_level": "raid1", 00:16:03.488 "superblock": true, 00:16:03.488 "num_base_bdevs": 2, 00:16:03.488 "num_base_bdevs_discovered": 1, 00:16:03.488 "num_base_bdevs_operational": 1, 00:16:03.488 "base_bdevs_list": [ 00:16:03.488 { 00:16:03.488 "name": null, 00:16:03.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.488 "is_configured": false, 00:16:03.488 "data_offset": 0, 00:16:03.488 "data_size": 63488 00:16:03.488 }, 00:16:03.488 { 00:16:03.488 "name": "BaseBdev2", 00:16:03.488 "uuid": "7b520862-4006-510a-a41b-f6286ad125b3", 00:16:03.488 "is_configured": true, 00:16:03.488 "data_offset": 2048, 00:16:03.488 "data_size": 63488 00:16:03.488 } 00:16:03.488 ] 00:16:03.488 }' 00:16:03.488 06:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:03.488 06:25:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.748 06:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:03.748 06:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:03.748 06:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:03.748 06:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:03.748 06:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:03.748 06:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.748 06:25:47 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.748 06:25:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.748 06:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.748 06:25:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.006 06:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:04.006 "name": "raid_bdev1", 00:16:04.006 "uuid": "74f8c10f-683a-41c2-8879-98019f1e4f92", 00:16:04.006 "strip_size_kb": 0, 00:16:04.006 "state": "online", 00:16:04.006 "raid_level": "raid1", 00:16:04.006 "superblock": true, 00:16:04.007 "num_base_bdevs": 2, 00:16:04.007 "num_base_bdevs_discovered": 1, 00:16:04.007 "num_base_bdevs_operational": 1, 00:16:04.007 "base_bdevs_list": [ 00:16:04.007 { 00:16:04.007 "name": null, 00:16:04.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.007 "is_configured": false, 00:16:04.007 "data_offset": 0, 00:16:04.007 "data_size": 63488 00:16:04.007 }, 00:16:04.007 { 00:16:04.007 "name": "BaseBdev2", 00:16:04.007 "uuid": "7b520862-4006-510a-a41b-f6286ad125b3", 00:16:04.007 "is_configured": true, 00:16:04.007 "data_offset": 2048, 00:16:04.007 "data_size": 63488 00:16:04.007 } 00:16:04.007 ] 00:16:04.007 }' 00:16:04.007 06:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:04.007 06:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:04.007 06:25:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:04.007 06:25:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:04.007 06:25:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:04.007 06:25:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:16:04.007 06:25:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.007 06:25:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.007 06:25:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:04.007 06:25:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.007 06:25:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.007 [2024-11-26 06:25:48.021918] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:04.007 [2024-11-26 06:25:48.021998] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:04.007 [2024-11-26 06:25:48.022023] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:04.007 [2024-11-26 06:25:48.022042] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:04.007 [2024-11-26 06:25:48.022541] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:04.007 [2024-11-26 06:25:48.022570] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:04.007 [2024-11-26 06:25:48.022656] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:04.007 [2024-11-26 06:25:48.022671] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:04.007 [2024-11-26 06:25:48.022682] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:04.007 [2024-11-26 06:25:48.022693] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:04.007 BaseBdev1 00:16:04.007 06:25:48 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.007 06:25:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:04.944 06:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:04.944 06:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:04.944 06:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:04.944 06:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:04.944 06:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:04.944 06:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:04.944 06:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.944 06:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.944 06:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.944 06:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.944 06:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.944 06:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.944 06:25:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.944 06:25:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.944 06:25:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.213 06:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.213 "name": "raid_bdev1", 00:16:05.213 "uuid": "74f8c10f-683a-41c2-8879-98019f1e4f92", 00:16:05.213 
"strip_size_kb": 0, 00:16:05.213 "state": "online", 00:16:05.213 "raid_level": "raid1", 00:16:05.213 "superblock": true, 00:16:05.213 "num_base_bdevs": 2, 00:16:05.213 "num_base_bdevs_discovered": 1, 00:16:05.213 "num_base_bdevs_operational": 1, 00:16:05.213 "base_bdevs_list": [ 00:16:05.213 { 00:16:05.213 "name": null, 00:16:05.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.213 "is_configured": false, 00:16:05.213 "data_offset": 0, 00:16:05.213 "data_size": 63488 00:16:05.214 }, 00:16:05.214 { 00:16:05.214 "name": "BaseBdev2", 00:16:05.214 "uuid": "7b520862-4006-510a-a41b-f6286ad125b3", 00:16:05.214 "is_configured": true, 00:16:05.214 "data_offset": 2048, 00:16:05.214 "data_size": 63488 00:16:05.214 } 00:16:05.214 ] 00:16:05.214 }' 00:16:05.214 06:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.214 06:25:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.481 06:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:05.481 06:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:05.481 06:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:05.481 06:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:05.481 06:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:05.481 06:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.481 06:25:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.481 06:25:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.481 06:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.481 06:25:49 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.481 06:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:05.481 "name": "raid_bdev1", 00:16:05.481 "uuid": "74f8c10f-683a-41c2-8879-98019f1e4f92", 00:16:05.481 "strip_size_kb": 0, 00:16:05.481 "state": "online", 00:16:05.481 "raid_level": "raid1", 00:16:05.481 "superblock": true, 00:16:05.481 "num_base_bdevs": 2, 00:16:05.481 "num_base_bdevs_discovered": 1, 00:16:05.481 "num_base_bdevs_operational": 1, 00:16:05.481 "base_bdevs_list": [ 00:16:05.481 { 00:16:05.481 "name": null, 00:16:05.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.481 "is_configured": false, 00:16:05.481 "data_offset": 0, 00:16:05.481 "data_size": 63488 00:16:05.481 }, 00:16:05.481 { 00:16:05.481 "name": "BaseBdev2", 00:16:05.481 "uuid": "7b520862-4006-510a-a41b-f6286ad125b3", 00:16:05.481 "is_configured": true, 00:16:05.481 "data_offset": 2048, 00:16:05.481 "data_size": 63488 00:16:05.481 } 00:16:05.481 ] 00:16:05.481 }' 00:16:05.481 06:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:05.481 06:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:05.481 06:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:05.481 06:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:05.481 06:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:05.481 06:25:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:16:05.481 06:25:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:05.481 06:25:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local 
arg=rpc_cmd 00:16:05.481 06:25:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:05.481 06:25:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:05.481 06:25:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:05.481 06:25:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:05.481 06:25:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.481 06:25:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.481 [2024-11-26 06:25:49.607371] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:05.481 [2024-11-26 06:25:49.607710] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:05.481 [2024-11-26 06:25:49.607781] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:05.481 request: 00:16:05.481 { 00:16:05.481 "base_bdev": "BaseBdev1", 00:16:05.740 "raid_bdev": "raid_bdev1", 00:16:05.740 "method": "bdev_raid_add_base_bdev", 00:16:05.740 "req_id": 1 00:16:05.740 } 00:16:05.740 Got JSON-RPC error response 00:16:05.740 response: 00:16:05.740 { 00:16:05.740 "code": -22, 00:16:05.740 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:05.740 } 00:16:05.740 06:25:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:05.740 06:25:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:16:05.740 06:25:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:05.740 06:25:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:05.740 06:25:49 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:05.740 06:25:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:06.677 06:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:06.677 06:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:06.677 06:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:06.677 06:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:06.677 06:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:06.677 06:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:06.677 06:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.677 06:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.677 06:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.677 06:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.677 06:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.677 06:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.677 06:25:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.677 06:25:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.677 06:25:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.677 06:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.677 "name": "raid_bdev1", 00:16:06.677 "uuid": 
"74f8c10f-683a-41c2-8879-98019f1e4f92", 00:16:06.677 "strip_size_kb": 0, 00:16:06.677 "state": "online", 00:16:06.677 "raid_level": "raid1", 00:16:06.677 "superblock": true, 00:16:06.677 "num_base_bdevs": 2, 00:16:06.677 "num_base_bdevs_discovered": 1, 00:16:06.677 "num_base_bdevs_operational": 1, 00:16:06.677 "base_bdevs_list": [ 00:16:06.677 { 00:16:06.677 "name": null, 00:16:06.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.677 "is_configured": false, 00:16:06.677 "data_offset": 0, 00:16:06.677 "data_size": 63488 00:16:06.677 }, 00:16:06.677 { 00:16:06.677 "name": "BaseBdev2", 00:16:06.677 "uuid": "7b520862-4006-510a-a41b-f6286ad125b3", 00:16:06.677 "is_configured": true, 00:16:06.677 "data_offset": 2048, 00:16:06.677 "data_size": 63488 00:16:06.677 } 00:16:06.677 ] 00:16:06.677 }' 00:16:06.677 06:25:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.677 06:25:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.295 06:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:07.295 06:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:07.295 06:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:07.295 06:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:07.295 06:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:07.295 06:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.295 06:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.295 06:25:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.295 06:25:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:16:07.295 06:25:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.295 06:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:07.295 "name": "raid_bdev1", 00:16:07.295 "uuid": "74f8c10f-683a-41c2-8879-98019f1e4f92", 00:16:07.295 "strip_size_kb": 0, 00:16:07.295 "state": "online", 00:16:07.295 "raid_level": "raid1", 00:16:07.295 "superblock": true, 00:16:07.295 "num_base_bdevs": 2, 00:16:07.295 "num_base_bdevs_discovered": 1, 00:16:07.295 "num_base_bdevs_operational": 1, 00:16:07.295 "base_bdevs_list": [ 00:16:07.295 { 00:16:07.295 "name": null, 00:16:07.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.295 "is_configured": false, 00:16:07.295 "data_offset": 0, 00:16:07.295 "data_size": 63488 00:16:07.295 }, 00:16:07.295 { 00:16:07.295 "name": "BaseBdev2", 00:16:07.295 "uuid": "7b520862-4006-510a-a41b-f6286ad125b3", 00:16:07.295 "is_configured": true, 00:16:07.295 "data_offset": 2048, 00:16:07.295 "data_size": 63488 00:16:07.295 } 00:16:07.295 ] 00:16:07.295 }' 00:16:07.295 06:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:07.295 06:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:07.295 06:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:07.295 06:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:07.295 06:25:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 76264 00:16:07.295 06:25:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 76264 ']' 00:16:07.295 06:25:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 76264 00:16:07.295 06:25:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:07.295 06:25:51 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:07.295 06:25:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76264 00:16:07.296 06:25:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:07.296 killing process with pid 76264 00:16:07.296 Received shutdown signal, test time was about 60.000000 seconds 00:16:07.296 00:16:07.296 Latency(us) 00:16:07.296 [2024-11-26T06:25:51.433Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:07.296 [2024-11-26T06:25:51.433Z] =================================================================================================================== 00:16:07.296 [2024-11-26T06:25:51.433Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:07.296 06:25:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:07.296 06:25:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76264' 00:16:07.296 06:25:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 76264 00:16:07.296 [2024-11-26 06:25:51.286500] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:07.296 06:25:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 76264 00:16:07.296 [2024-11-26 06:25:51.286651] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:07.296 [2024-11-26 06:25:51.286708] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:07.296 [2024-11-26 06:25:51.286720] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:07.570 [2024-11-26 06:25:51.645477] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:08.948 06:25:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 
00:16:08.948 00:16:08.948 real 0m24.655s 00:16:08.948 user 0m29.470s 00:16:08.948 sys 0m4.395s 00:16:08.948 ************************************ 00:16:08.948 END TEST raid_rebuild_test_sb 00:16:08.948 ************************************ 00:16:08.948 06:25:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:08.948 06:25:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.948 06:25:52 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:16:08.948 06:25:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:08.948 06:25:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:08.948 06:25:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:08.948 ************************************ 00:16:08.948 START TEST raid_rebuild_test_io 00:16:08.948 ************************************ 00:16:08.948 06:25:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:16:08.948 06:25:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:08.948 06:25:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:16:08.948 06:25:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:08.948 06:25:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:16:08.948 06:25:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:08.948 06:25:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:08.948 06:25:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:08.948 06:25:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:08.948 06:25:52 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:08.948 06:25:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:08.948 06:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:08.948 06:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:08.948 06:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:08.949 06:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:08.949 06:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:08.949 06:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:08.949 06:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:08.949 06:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:08.949 06:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:08.949 06:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:08.949 06:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:08.949 06:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:08.949 06:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:16:08.949 06:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=77005 00:16:08.949 06:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:08.949 06:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 77005 00:16:08.949 06:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 
77005 ']' 00:16:08.949 06:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:08.949 06:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:08.949 06:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:08.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:08.949 06:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:08.949 06:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:09.209 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:09.209 Zero copy mechanism will not be used. 00:16:09.209 [2024-11-26 06:25:53.098675] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:16:09.209 [2024-11-26 06:25:53.098793] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77005 ] 00:16:09.209 [2024-11-26 06:25:53.272637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:09.468 [2024-11-26 06:25:53.400198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.727 [2024-11-26 06:25:53.622317] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:09.727 [2024-11-26 06:25:53.622386] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:09.986 06:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:09.986 06:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:16:09.986 06:25:53 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:09.986 06:25:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:09.986 06:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.986 06:25:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:09.986 BaseBdev1_malloc 00:16:09.986 06:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.986 06:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:09.986 06:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.986 06:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:09.986 [2024-11-26 06:25:54.034267] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:09.986 [2024-11-26 06:25:54.034443] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:09.986 [2024-11-26 06:25:54.034549] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:09.986 [2024-11-26 06:25:54.034638] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:09.986 [2024-11-26 06:25:54.036991] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:09.986 [2024-11-26 06:25:54.037091] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:09.986 BaseBdev1 00:16:09.986 06:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.986 06:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:09.986 06:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 
00:16:09.986 06:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.986 06:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:09.986 BaseBdev2_malloc 00:16:09.986 06:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.986 06:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:09.986 06:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.986 06:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:09.986 [2024-11-26 06:25:54.091458] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:09.986 [2024-11-26 06:25:54.091606] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:09.986 [2024-11-26 06:25:54.091643] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:09.986 [2024-11-26 06:25:54.091701] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:09.986 [2024-11-26 06:25:54.093856] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:09.986 [2024-11-26 06:25:54.093900] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:09.986 BaseBdev2 00:16:09.986 06:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.986 06:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:09.986 06:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.986 06:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:10.246 spare_malloc 00:16:10.246 06:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:16:10.246 06:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:10.246 06:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.246 06:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:10.246 spare_delay 00:16:10.246 06:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.246 06:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:10.246 06:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.246 06:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:10.246 [2024-11-26 06:25:54.175194] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:10.246 [2024-11-26 06:25:54.175267] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:10.246 [2024-11-26 06:25:54.175288] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:10.246 [2024-11-26 06:25:54.175298] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:10.246 [2024-11-26 06:25:54.177418] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:10.246 [2024-11-26 06:25:54.177461] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:10.246 spare 00:16:10.246 06:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.246 06:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:16:10.246 06:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.246 
06:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:10.246 [2024-11-26 06:25:54.187229] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:10.246 [2024-11-26 06:25:54.189185] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:10.246 [2024-11-26 06:25:54.189344] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:10.246 [2024-11-26 06:25:54.189391] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:10.246 [2024-11-26 06:25:54.189710] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:10.246 [2024-11-26 06:25:54.189906] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:10.246 [2024-11-26 06:25:54.189947] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:10.246 [2024-11-26 06:25:54.190152] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:10.246 06:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.246 06:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:10.246 06:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:10.246 06:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:10.246 06:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:10.246 06:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:10.246 06:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:10.246 06:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:16:10.246 06:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.246 06:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.246 06:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.246 06:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.246 06:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.246 06:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:10.246 06:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.246 06:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.246 06:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.246 "name": "raid_bdev1", 00:16:10.246 "uuid": "e8f5e614-25aa-40e9-b5d1-d6d9d2984998", 00:16:10.246 "strip_size_kb": 0, 00:16:10.246 "state": "online", 00:16:10.246 "raid_level": "raid1", 00:16:10.246 "superblock": false, 00:16:10.246 "num_base_bdevs": 2, 00:16:10.246 "num_base_bdevs_discovered": 2, 00:16:10.246 "num_base_bdevs_operational": 2, 00:16:10.246 "base_bdevs_list": [ 00:16:10.246 { 00:16:10.246 "name": "BaseBdev1", 00:16:10.246 "uuid": "70808cce-156c-5439-83fb-2c9de1e100bc", 00:16:10.246 "is_configured": true, 00:16:10.246 "data_offset": 0, 00:16:10.246 "data_size": 65536 00:16:10.246 }, 00:16:10.246 { 00:16:10.246 "name": "BaseBdev2", 00:16:10.246 "uuid": "b22aea08-280a-5858-95f3-168178847385", 00:16:10.246 "is_configured": true, 00:16:10.246 "data_offset": 0, 00:16:10.246 "data_size": 65536 00:16:10.246 } 00:16:10.246 ] 00:16:10.246 }' 00:16:10.246 06:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.246 06:25:54 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@10 -- # set +x 00:16:10.505 06:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:10.505 06:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.505 06:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:10.505 06:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:10.765 [2024-11-26 06:25:54.638813] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:10.765 06:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.765 06:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:16:10.765 06:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.765 06:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.765 06:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:10.765 06:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:10.765 06:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.765 06:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:10.765 06:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:16:10.765 06:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:10.765 06:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:10.765 06:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.765 06:25:54 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@10 -- # set +x 00:16:10.765 [2024-11-26 06:25:54.734358] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:10.765 06:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.765 06:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:10.765 06:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:10.765 06:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:10.765 06:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:10.765 06:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:10.765 06:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:10.765 06:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.765 06:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.765 06:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.765 06:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.765 06:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.765 06:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.765 06:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.765 06:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:10.765 06:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.765 06:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:16:10.765 "name": "raid_bdev1", 00:16:10.765 "uuid": "e8f5e614-25aa-40e9-b5d1-d6d9d2984998", 00:16:10.765 "strip_size_kb": 0, 00:16:10.765 "state": "online", 00:16:10.765 "raid_level": "raid1", 00:16:10.765 "superblock": false, 00:16:10.765 "num_base_bdevs": 2, 00:16:10.765 "num_base_bdevs_discovered": 1, 00:16:10.765 "num_base_bdevs_operational": 1, 00:16:10.765 "base_bdevs_list": [ 00:16:10.765 { 00:16:10.765 "name": null, 00:16:10.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.765 "is_configured": false, 00:16:10.765 "data_offset": 0, 00:16:10.765 "data_size": 65536 00:16:10.765 }, 00:16:10.765 { 00:16:10.765 "name": "BaseBdev2", 00:16:10.765 "uuid": "b22aea08-280a-5858-95f3-168178847385", 00:16:10.765 "is_configured": true, 00:16:10.765 "data_offset": 0, 00:16:10.765 "data_size": 65536 00:16:10.765 } 00:16:10.765 ] 00:16:10.765 }' 00:16:10.765 06:25:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.765 06:25:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:10.765 [2024-11-26 06:25:54.834414] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:10.765 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:10.765 Zero copy mechanism will not be used. 00:16:10.765 Running I/O for 60 seconds... 
00:16:11.334 06:25:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:11.334 06:25:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.334 06:25:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.334 [2024-11-26 06:25:55.189190] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:11.334 06:25:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.334 06:25:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:11.334 [2024-11-26 06:25:55.270263] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:11.334 [2024-11-26 06:25:55.272339] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:11.334 [2024-11-26 06:25:55.398565] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:11.334 [2024-11-26 06:25:55.399202] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:11.593 [2024-11-26 06:25:55.621770] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:11.593 [2024-11-26 06:25:55.622152] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:11.852 163.00 IOPS, 489.00 MiB/s [2024-11-26T06:25:55.989Z] [2024-11-26 06:25:55.956524] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:11.852 [2024-11-26 06:25:55.957147] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:12.112 06:25:56 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:12.112 06:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:12.112 06:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:12.112 06:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:12.112 06:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:12.112 06:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.112 06:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.112 06:25:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.112 06:25:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:12.372 06:25:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.372 06:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:12.372 "name": "raid_bdev1", 00:16:12.372 "uuid": "e8f5e614-25aa-40e9-b5d1-d6d9d2984998", 00:16:12.372 "strip_size_kb": 0, 00:16:12.372 "state": "online", 00:16:12.372 "raid_level": "raid1", 00:16:12.372 "superblock": false, 00:16:12.372 "num_base_bdevs": 2, 00:16:12.372 "num_base_bdevs_discovered": 2, 00:16:12.372 "num_base_bdevs_operational": 2, 00:16:12.372 "process": { 00:16:12.372 "type": "rebuild", 00:16:12.372 "target": "spare", 00:16:12.372 "progress": { 00:16:12.372 "blocks": 10240, 00:16:12.372 "percent": 15 00:16:12.372 } 00:16:12.372 }, 00:16:12.372 "base_bdevs_list": [ 00:16:12.372 { 00:16:12.372 "name": "spare", 00:16:12.372 "uuid": "85395c27-0927-551c-99a1-5790fc6315a4", 00:16:12.372 "is_configured": true, 00:16:12.372 "data_offset": 0, 00:16:12.372 "data_size": 65536 00:16:12.372 }, 00:16:12.372 { 
00:16:12.372 "name": "BaseBdev2", 00:16:12.372 "uuid": "b22aea08-280a-5858-95f3-168178847385", 00:16:12.372 "is_configured": true, 00:16:12.372 "data_offset": 0, 00:16:12.372 "data_size": 65536 00:16:12.372 } 00:16:12.372 ] 00:16:12.372 }' 00:16:12.372 06:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:12.372 06:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:12.372 06:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:12.372 06:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:12.372 06:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:12.372 06:25:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.372 06:25:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:12.372 [2024-11-26 06:25:56.389008] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:12.372 [2024-11-26 06:25:56.403474] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:16:12.648 [2024-11-26 06:25:56.509647] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:12.648 [2024-11-26 06:25:56.518126] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:12.648 [2024-11-26 06:25:56.518190] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:12.648 [2024-11-26 06:25:56.518203] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:12.648 [2024-11-26 06:25:56.566502] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:16:12.648 06:25:56 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.648 06:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:12.648 06:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:12.648 06:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:12.648 06:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:12.648 06:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:12.648 06:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:12.648 06:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:12.648 06:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:12.648 06:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:12.648 06:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:12.648 06:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.648 06:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.648 06:25:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.648 06:25:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:12.648 06:25:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.648 06:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:12.648 "name": "raid_bdev1", 00:16:12.648 "uuid": "e8f5e614-25aa-40e9-b5d1-d6d9d2984998", 00:16:12.648 "strip_size_kb": 0, 00:16:12.648 "state": "online", 
00:16:12.648 "raid_level": "raid1", 00:16:12.648 "superblock": false, 00:16:12.648 "num_base_bdevs": 2, 00:16:12.648 "num_base_bdevs_discovered": 1, 00:16:12.648 "num_base_bdevs_operational": 1, 00:16:12.648 "base_bdevs_list": [ 00:16:12.648 { 00:16:12.648 "name": null, 00:16:12.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.648 "is_configured": false, 00:16:12.648 "data_offset": 0, 00:16:12.648 "data_size": 65536 00:16:12.648 }, 00:16:12.648 { 00:16:12.648 "name": "BaseBdev2", 00:16:12.648 "uuid": "b22aea08-280a-5858-95f3-168178847385", 00:16:12.648 "is_configured": true, 00:16:12.648 "data_offset": 0, 00:16:12.648 "data_size": 65536 00:16:12.648 } 00:16:12.648 ] 00:16:12.648 }' 00:16:12.648 06:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:12.648 06:25:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:12.908 148.00 IOPS, 444.00 MiB/s [2024-11-26T06:25:57.045Z] 06:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:12.908 06:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:12.908 06:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:12.908 06:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:12.908 06:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:12.908 06:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.908 06:25:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.908 06:25:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:12.908 06:25:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.908 06:25:57 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.168 06:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:13.168 "name": "raid_bdev1", 00:16:13.168 "uuid": "e8f5e614-25aa-40e9-b5d1-d6d9d2984998", 00:16:13.168 "strip_size_kb": 0, 00:16:13.168 "state": "online", 00:16:13.168 "raid_level": "raid1", 00:16:13.168 "superblock": false, 00:16:13.168 "num_base_bdevs": 2, 00:16:13.168 "num_base_bdevs_discovered": 1, 00:16:13.168 "num_base_bdevs_operational": 1, 00:16:13.168 "base_bdevs_list": [ 00:16:13.168 { 00:16:13.168 "name": null, 00:16:13.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.168 "is_configured": false, 00:16:13.168 "data_offset": 0, 00:16:13.168 "data_size": 65536 00:16:13.168 }, 00:16:13.168 { 00:16:13.168 "name": "BaseBdev2", 00:16:13.168 "uuid": "b22aea08-280a-5858-95f3-168178847385", 00:16:13.168 "is_configured": true, 00:16:13.168 "data_offset": 0, 00:16:13.168 "data_size": 65536 00:16:13.168 } 00:16:13.168 ] 00:16:13.168 }' 00:16:13.168 06:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:13.168 06:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:13.168 06:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:13.168 06:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:13.168 06:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:13.168 06:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.168 06:25:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:13.168 [2024-11-26 06:25:57.127751] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:13.168 06:25:57 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.168 06:25:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:13.168 [2024-11-26 06:25:57.196650] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:13.168 [2024-11-26 06:25:57.198663] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:13.427 [2024-11-26 06:25:57.312671] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:13.427 [2024-11-26 06:25:57.313488] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:13.427 [2024-11-26 06:25:57.535762] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:13.427 [2024-11-26 06:25:57.536274] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:13.996 159.00 IOPS, 477.00 MiB/s [2024-11-26T06:25:58.133Z] [2024-11-26 06:25:57.898886] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:13.996 [2024-11-26 06:25:58.127278] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:13.996 [2024-11-26 06:25:58.127784] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:14.255 06:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:14.255 06:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:14.255 06:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:14.255 06:25:58 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:16:14.255 06:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:14.255 06:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.255 06:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.255 06:25:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.255 06:25:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:14.255 06:25:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.255 06:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:14.255 "name": "raid_bdev1", 00:16:14.255 "uuid": "e8f5e614-25aa-40e9-b5d1-d6d9d2984998", 00:16:14.255 "strip_size_kb": 0, 00:16:14.255 "state": "online", 00:16:14.255 "raid_level": "raid1", 00:16:14.255 "superblock": false, 00:16:14.255 "num_base_bdevs": 2, 00:16:14.255 "num_base_bdevs_discovered": 2, 00:16:14.255 "num_base_bdevs_operational": 2, 00:16:14.255 "process": { 00:16:14.255 "type": "rebuild", 00:16:14.255 "target": "spare", 00:16:14.255 "progress": { 00:16:14.255 "blocks": 10240, 00:16:14.255 "percent": 15 00:16:14.255 } 00:16:14.255 }, 00:16:14.255 "base_bdevs_list": [ 00:16:14.255 { 00:16:14.255 "name": "spare", 00:16:14.255 "uuid": "85395c27-0927-551c-99a1-5790fc6315a4", 00:16:14.255 "is_configured": true, 00:16:14.255 "data_offset": 0, 00:16:14.255 "data_size": 65536 00:16:14.255 }, 00:16:14.255 { 00:16:14.255 "name": "BaseBdev2", 00:16:14.255 "uuid": "b22aea08-280a-5858-95f3-168178847385", 00:16:14.255 "is_configured": true, 00:16:14.255 "data_offset": 0, 00:16:14.255 "data_size": 65536 00:16:14.255 } 00:16:14.255 ] 00:16:14.255 }' 00:16:14.255 06:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:14.255 
06:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:14.255 06:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:14.255 06:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:14.255 06:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:14.255 06:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:16:14.255 06:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:14.255 06:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:16:14.255 06:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=431 00:16:14.255 06:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:14.255 06:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:14.255 06:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:14.255 06:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:14.255 06:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:14.255 06:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:14.255 06:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.255 06:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.255 06:25:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.255 06:25:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:14.255 06:25:58 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.516 06:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:14.516 "name": "raid_bdev1", 00:16:14.516 "uuid": "e8f5e614-25aa-40e9-b5d1-d6d9d2984998", 00:16:14.516 "strip_size_kb": 0, 00:16:14.516 "state": "online", 00:16:14.516 "raid_level": "raid1", 00:16:14.516 "superblock": false, 00:16:14.516 "num_base_bdevs": 2, 00:16:14.516 "num_base_bdevs_discovered": 2, 00:16:14.516 "num_base_bdevs_operational": 2, 00:16:14.516 "process": { 00:16:14.516 "type": "rebuild", 00:16:14.516 "target": "spare", 00:16:14.516 "progress": { 00:16:14.516 "blocks": 12288, 00:16:14.516 "percent": 18 00:16:14.516 } 00:16:14.516 }, 00:16:14.516 "base_bdevs_list": [ 00:16:14.516 { 00:16:14.516 "name": "spare", 00:16:14.516 "uuid": "85395c27-0927-551c-99a1-5790fc6315a4", 00:16:14.516 "is_configured": true, 00:16:14.516 "data_offset": 0, 00:16:14.516 "data_size": 65536 00:16:14.516 }, 00:16:14.516 { 00:16:14.516 "name": "BaseBdev2", 00:16:14.516 "uuid": "b22aea08-280a-5858-95f3-168178847385", 00:16:14.516 "is_configured": true, 00:16:14.516 "data_offset": 0, 00:16:14.516 "data_size": 65536 00:16:14.516 } 00:16:14.516 ] 00:16:14.516 }' 00:16:14.516 06:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:14.516 06:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:14.516 06:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:14.516 [2024-11-26 06:25:58.479951] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:16:14.516 06:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:14.516 06:25:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:14.516 [2024-11-26 
06:25:58.590380] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:16:14.775 [2024-11-26 06:25:58.822476] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:16:15.034 131.75 IOPS, 395.25 MiB/s [2024-11-26T06:25:59.171Z] [2024-11-26 06:25:58.949180] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:16:15.611 [2024-11-26 06:25:59.481580] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:16:15.611 06:25:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:15.611 06:25:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:15.611 06:25:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:15.611 06:25:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:15.611 06:25:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:15.611 06:25:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:15.611 06:25:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.611 06:25:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.611 06:25:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:15.611 06:25:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.611 06:25:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.611 06:25:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:16:15.611 "name": "raid_bdev1", 00:16:15.611 "uuid": "e8f5e614-25aa-40e9-b5d1-d6d9d2984998", 00:16:15.611 "strip_size_kb": 0, 00:16:15.611 "state": "online", 00:16:15.611 "raid_level": "raid1", 00:16:15.611 "superblock": false, 00:16:15.611 "num_base_bdevs": 2, 00:16:15.611 "num_base_bdevs_discovered": 2, 00:16:15.611 "num_base_bdevs_operational": 2, 00:16:15.611 "process": { 00:16:15.611 "type": "rebuild", 00:16:15.611 "target": "spare", 00:16:15.611 "progress": { 00:16:15.611 "blocks": 32768, 00:16:15.611 "percent": 50 00:16:15.611 } 00:16:15.611 }, 00:16:15.611 "base_bdevs_list": [ 00:16:15.611 { 00:16:15.611 "name": "spare", 00:16:15.611 "uuid": "85395c27-0927-551c-99a1-5790fc6315a4", 00:16:15.611 "is_configured": true, 00:16:15.611 "data_offset": 0, 00:16:15.611 "data_size": 65536 00:16:15.611 }, 00:16:15.611 { 00:16:15.611 "name": "BaseBdev2", 00:16:15.611 "uuid": "b22aea08-280a-5858-95f3-168178847385", 00:16:15.611 "is_configured": true, 00:16:15.611 "data_offset": 0, 00:16:15.611 "data_size": 65536 00:16:15.611 } 00:16:15.611 ] 00:16:15.611 }' 00:16:15.611 06:25:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:15.611 06:25:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:15.611 06:25:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:15.611 06:25:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:15.611 06:25:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:15.611 [2024-11-26 06:25:59.683330] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:16:15.611 [2024-11-26 06:25:59.683819] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:16:15.882 115.20 IOPS, 345.60 
MiB/s [2024-11-26T06:26:00.019Z] [2024-11-26 06:25:59.921911] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:16:16.142 [2024-11-26 06:26:00.130815] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:16:16.142 [2024-11-26 06:26:00.131334] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:16:16.711 06:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:16.711 06:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:16.711 06:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:16.711 06:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:16.711 06:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:16.711 06:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:16.711 06:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.711 06:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.711 06:26:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.711 06:26:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:16.711 06:26:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.711 06:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:16.711 "name": "raid_bdev1", 00:16:16.711 "uuid": "e8f5e614-25aa-40e9-b5d1-d6d9d2984998", 00:16:16.711 "strip_size_kb": 0, 00:16:16.711 "state": "online", 
00:16:16.711 "raid_level": "raid1", 00:16:16.711 "superblock": false, 00:16:16.711 "num_base_bdevs": 2, 00:16:16.711 "num_base_bdevs_discovered": 2, 00:16:16.711 "num_base_bdevs_operational": 2, 00:16:16.711 "process": { 00:16:16.711 "type": "rebuild", 00:16:16.711 "target": "spare", 00:16:16.711 "progress": { 00:16:16.711 "blocks": 49152, 00:16:16.711 "percent": 75 00:16:16.711 } 00:16:16.711 }, 00:16:16.711 "base_bdevs_list": [ 00:16:16.711 { 00:16:16.711 "name": "spare", 00:16:16.711 "uuid": "85395c27-0927-551c-99a1-5790fc6315a4", 00:16:16.711 "is_configured": true, 00:16:16.711 "data_offset": 0, 00:16:16.711 "data_size": 65536 00:16:16.711 }, 00:16:16.711 { 00:16:16.711 "name": "BaseBdev2", 00:16:16.711 "uuid": "b22aea08-280a-5858-95f3-168178847385", 00:16:16.711 "is_configured": true, 00:16:16.711 "data_offset": 0, 00:16:16.711 "data_size": 65536 00:16:16.711 } 00:16:16.711 ] 00:16:16.711 }' 00:16:16.711 06:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:16.711 [2024-11-26 06:26:00.758202] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:16:16.711 06:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:16.711 06:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:16.711 06:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:16.711 06:26:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:16.970 103.67 IOPS, 311.00 MiB/s [2024-11-26T06:26:01.107Z] [2024-11-26 06:26:00.974184] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:16:17.539 [2024-11-26 06:26:01.638054] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:17.800 [2024-11-26 
06:26:01.737865] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:17.800 [2024-11-26 06:26:01.740494] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:17.800 06:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:17.800 06:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:17.800 06:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:17.800 06:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:17.800 06:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:17.800 06:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:17.800 06:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.800 06:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.800 06:26:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.800 06:26:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:17.800 93.71 IOPS, 281.14 MiB/s [2024-11-26T06:26:01.937Z] 06:26:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.800 06:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:17.800 "name": "raid_bdev1", 00:16:17.800 "uuid": "e8f5e614-25aa-40e9-b5d1-d6d9d2984998", 00:16:17.800 "strip_size_kb": 0, 00:16:17.800 "state": "online", 00:16:17.800 "raid_level": "raid1", 00:16:17.800 "superblock": false, 00:16:17.800 "num_base_bdevs": 2, 00:16:17.800 "num_base_bdevs_discovered": 2, 00:16:17.800 "num_base_bdevs_operational": 2, 00:16:17.800 "base_bdevs_list": [ 
00:16:17.800 { 00:16:17.800 "name": "spare", 00:16:17.800 "uuid": "85395c27-0927-551c-99a1-5790fc6315a4", 00:16:17.800 "is_configured": true, 00:16:17.800 "data_offset": 0, 00:16:17.800 "data_size": 65536 00:16:17.800 }, 00:16:17.800 { 00:16:17.800 "name": "BaseBdev2", 00:16:17.800 "uuid": "b22aea08-280a-5858-95f3-168178847385", 00:16:17.800 "is_configured": true, 00:16:17.800 "data_offset": 0, 00:16:17.800 "data_size": 65536 00:16:17.800 } 00:16:17.800 ] 00:16:17.800 }' 00:16:17.800 06:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:17.800 06:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:17.800 06:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:18.061 06:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:18.061 06:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:16:18.061 06:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:18.061 06:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:18.061 06:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:18.061 06:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:18.061 06:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:18.061 06:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.061 06:26:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.061 06:26:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.061 06:26:01 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:16:18.061 06:26:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.061 06:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:18.061 "name": "raid_bdev1", 00:16:18.061 "uuid": "e8f5e614-25aa-40e9-b5d1-d6d9d2984998", 00:16:18.061 "strip_size_kb": 0, 00:16:18.061 "state": "online", 00:16:18.061 "raid_level": "raid1", 00:16:18.061 "superblock": false, 00:16:18.061 "num_base_bdevs": 2, 00:16:18.061 "num_base_bdevs_discovered": 2, 00:16:18.061 "num_base_bdevs_operational": 2, 00:16:18.061 "base_bdevs_list": [ 00:16:18.061 { 00:16:18.061 "name": "spare", 00:16:18.061 "uuid": "85395c27-0927-551c-99a1-5790fc6315a4", 00:16:18.061 "is_configured": true, 00:16:18.061 "data_offset": 0, 00:16:18.061 "data_size": 65536 00:16:18.061 }, 00:16:18.061 { 00:16:18.061 "name": "BaseBdev2", 00:16:18.061 "uuid": "b22aea08-280a-5858-95f3-168178847385", 00:16:18.061 "is_configured": true, 00:16:18.061 "data_offset": 0, 00:16:18.061 "data_size": 65536 00:16:18.061 } 00:16:18.061 ] 00:16:18.061 }' 00:16:18.061 06:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:18.061 06:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:18.061 06:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:18.061 06:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:18.061 06:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:18.061 06:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:18.061 06:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:18.061 06:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid1 00:16:18.061 06:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:18.061 06:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:18.061 06:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.061 06:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.061 06:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.061 06:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.061 06:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.061 06:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.061 06:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.061 06:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:18.061 06:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.061 06:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.061 "name": "raid_bdev1", 00:16:18.061 "uuid": "e8f5e614-25aa-40e9-b5d1-d6d9d2984998", 00:16:18.061 "strip_size_kb": 0, 00:16:18.061 "state": "online", 00:16:18.061 "raid_level": "raid1", 00:16:18.061 "superblock": false, 00:16:18.061 "num_base_bdevs": 2, 00:16:18.061 "num_base_bdevs_discovered": 2, 00:16:18.061 "num_base_bdevs_operational": 2, 00:16:18.061 "base_bdevs_list": [ 00:16:18.061 { 00:16:18.061 "name": "spare", 00:16:18.061 "uuid": "85395c27-0927-551c-99a1-5790fc6315a4", 00:16:18.061 "is_configured": true, 00:16:18.061 "data_offset": 0, 00:16:18.061 "data_size": 65536 00:16:18.061 }, 00:16:18.061 { 00:16:18.061 "name": "BaseBdev2", 00:16:18.061 "uuid": 
"b22aea08-280a-5858-95f3-168178847385", 00:16:18.061 "is_configured": true, 00:16:18.061 "data_offset": 0, 00:16:18.061 "data_size": 65536 00:16:18.061 } 00:16:18.061 ] 00:16:18.061 }' 00:16:18.061 06:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.061 06:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:18.631 06:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:18.631 06:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.631 06:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:18.631 [2024-11-26 06:26:02.587317] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:18.631 [2024-11-26 06:26:02.587364] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:18.631 00:16:18.631 Latency(us) 00:16:18.631 [2024-11-26T06:26:02.768Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:18.631 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:16:18.631 raid_bdev1 : 7.81 87.54 262.62 0.00 0.00 14932.11 334.48 114473.36 00:16:18.631 [2024-11-26T06:26:02.768Z] =================================================================================================================== 00:16:18.631 [2024-11-26T06:26:02.768Z] Total : 87.54 262.62 0.00 0.00 14932.11 334.48 114473.36 00:16:18.631 { 00:16:18.631 "results": [ 00:16:18.631 { 00:16:18.631 "job": "raid_bdev1", 00:16:18.631 "core_mask": "0x1", 00:16:18.631 "workload": "randrw", 00:16:18.631 "percentage": 50, 00:16:18.631 "status": "finished", 00:16:18.631 "queue_depth": 2, 00:16:18.631 "io_size": 3145728, 00:16:18.631 "runtime": 7.813706, 00:16:18.631 "iops": 87.53848685886057, 00:16:18.631 "mibps": 262.6154605765817, 00:16:18.631 "io_failed": 0, 00:16:18.631 
"io_timeout": 0, 00:16:18.631 "avg_latency_us": 14932.110830205062, 00:16:18.631 "min_latency_us": 334.4768558951965, 00:16:18.631 "max_latency_us": 114473.36244541485 00:16:18.632 } 00:16:18.632 ], 00:16:18.632 "core_count": 1 00:16:18.632 } 00:16:18.632 [2024-11-26 06:26:02.657200] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:18.632 [2024-11-26 06:26:02.657256] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:18.632 [2024-11-26 06:26:02.657341] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:18.632 [2024-11-26 06:26:02.657351] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:18.632 06:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.632 06:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.632 06:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.632 06:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:18.632 06:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:16:18.632 06:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.632 06:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:18.632 06:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:18.632 06:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:16:18.632 06:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:16:18.632 06:26:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:18.632 06:26:02 bdev_raid.raid_rebuild_test_io 
-- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:16:18.632 06:26:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:18.632 06:26:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:18.632 06:26:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:18.632 06:26:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:16:18.632 06:26:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:18.632 06:26:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:18.632 06:26:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:16:18.891 /dev/nbd0 00:16:18.891 06:26:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:18.891 06:26:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:18.891 06:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:18.891 06:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:16:18.891 06:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:18.891 06:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:18.891 06:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:18.891 06:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:16:18.891 06:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:18.891 06:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:18.891 06:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:18.891 1+0 records in 00:16:18.891 1+0 records out 00:16:18.891 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000346756 s, 11.8 MB/s 00:16:18.891 06:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:18.891 06:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:16:18.891 06:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:18.891 06:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:18.891 06:26:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:16:18.891 06:26:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:18.892 06:26:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:18.892 06:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:18.892 06:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:16:18.892 06:26:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:16:18.892 06:26:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:18.892 06:26:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:16:18.892 06:26:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:18.892 06:26:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:16:18.892 06:26:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:18.892 06:26:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:16:18.892 
06:26:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:18.892 06:26:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:18.892 06:26:02 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:16:19.162 /dev/nbd1 00:16:19.162 06:26:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:19.162 06:26:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:19.162 06:26:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:19.162 06:26:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:16:19.162 06:26:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:19.162 06:26:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:19.162 06:26:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:19.162 06:26:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:16:19.162 06:26:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:19.162 06:26:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:19.162 06:26:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:19.162 1+0 records in 00:16:19.162 1+0 records out 00:16:19.162 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000514526 s, 8.0 MB/s 00:16:19.162 06:26:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:19.162 06:26:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 
00:16:19.162 06:26:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:19.162 06:26:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:19.162 06:26:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:16:19.162 06:26:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:19.162 06:26:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:19.162 06:26:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:19.438 06:26:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:16:19.438 06:26:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:19.438 06:26:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:16:19.438 06:26:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:19.438 06:26:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:16:19.438 06:26:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:19.438 06:26:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:19.697 06:26:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:19.697 06:26:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:19.697 06:26:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:19.697 06:26:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:19.697 06:26:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:19.697 
06:26:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:19.697 06:26:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:16:19.697 06:26:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:19.697 06:26:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:19.697 06:26:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:19.697 06:26:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:19.697 06:26:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:19.697 06:26:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:16:19.697 06:26:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:19.697 06:26:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:19.957 06:26:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:19.957 06:26:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:19.957 06:26:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:19.957 06:26:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:19.957 06:26:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:19.957 06:26:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:19.957 06:26:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:16:19.957 06:26:03 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:19.957 06:26:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = 
true ']' 00:16:19.957 06:26:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 77005 00:16:19.957 06:26:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 77005 ']' 00:16:19.957 06:26:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 77005 00:16:19.957 06:26:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:16:19.957 06:26:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:19.957 06:26:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77005 00:16:19.957 06:26:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:19.957 06:26:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:19.957 killing process with pid 77005 00:16:19.957 06:26:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77005' 00:16:19.957 06:26:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 77005 00:16:19.957 Received shutdown signal, test time was about 9.207495 seconds 00:16:19.957 00:16:19.957 Latency(us) 00:16:19.957 [2024-11-26T06:26:04.094Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:19.957 [2024-11-26T06:26:04.094Z] =================================================================================================================== 00:16:19.957 [2024-11-26T06:26:04.094Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:19.957 [2024-11-26 06:26:04.026281] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:19.957 06:26:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 77005 00:16:20.217 [2024-11-26 06:26:04.280780] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:21.598 06:26:05 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@786 -- # return 0 00:16:21.598 00:16:21.598 real 0m12.569s 00:16:21.598 user 0m15.817s 00:16:21.598 sys 0m1.622s 00:16:21.598 06:26:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:21.598 06:26:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:21.598 ************************************ 00:16:21.598 END TEST raid_rebuild_test_io 00:16:21.598 ************************************ 00:16:21.598 06:26:05 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:16:21.598 06:26:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:21.598 06:26:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:21.598 06:26:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:21.598 ************************************ 00:16:21.598 START TEST raid_rebuild_test_sb_io 00:16:21.598 ************************************ 00:16:21.598 06:26:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:16:21.598 06:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:21.598 06:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:16:21.598 06:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:21.598 06:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:16:21.598 06:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:21.598 06:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:21.598 06:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:21.598 06:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 
00:16:21.598 06:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:21.598 06:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:21.598 06:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:21.598 06:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:21.598 06:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:21.598 06:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:21.598 06:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:21.598 06:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:21.598 06:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:21.598 06:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:21.598 06:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:21.598 06:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:21.598 06:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:21.598 06:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:21.598 06:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:21.598 06:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:21.598 06:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=77382 00:16:21.598 06:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 77382 00:16:21.598 06:26:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:21.598 06:26:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 77382 ']' 00:16:21.598 06:26:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:21.598 06:26:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:21.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:21.598 06:26:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:21.598 06:26:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:21.598 06:26:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:21.856 [2024-11-26 06:26:05.752201] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:16:21.856 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:21.856 Zero copy mechanism will not be used. 
00:16:21.856 [2024-11-26 06:26:05.752342] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77382 ] 00:16:21.856 [2024-11-26 06:26:05.939090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:22.115 [2024-11-26 06:26:06.060740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:22.373 [2024-11-26 06:26:06.281232] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:22.373 [2024-11-26 06:26:06.281305] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:22.633 06:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:22.633 06:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:16:22.633 06:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:22.633 06:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:22.633 06:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.633 06:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:22.633 BaseBdev1_malloc 00:16:22.633 06:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.633 06:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:22.633 06:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.633 06:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:22.633 [2024-11-26 06:26:06.674928] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:22.633 [2024-11-26 06:26:06.675016] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:22.633 [2024-11-26 06:26:06.675046] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:22.633 [2024-11-26 06:26:06.675074] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:22.633 [2024-11-26 06:26:06.677400] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:22.633 [2024-11-26 06:26:06.677446] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:22.633 BaseBdev1 00:16:22.633 06:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.633 06:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:22.633 06:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:22.633 06:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.633 06:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:22.633 BaseBdev2_malloc 00:16:22.633 06:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.633 06:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:22.633 06:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.633 06:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:22.634 [2024-11-26 06:26:06.732040] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:22.634 [2024-11-26 06:26:06.732132] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:16:22.634 [2024-11-26 06:26:06.732152] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:22.634 [2024-11-26 06:26:06.732166] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:22.634 [2024-11-26 06:26:06.734307] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:22.634 [2024-11-26 06:26:06.734347] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:22.634 BaseBdev2 00:16:22.634 06:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.634 06:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:22.634 06:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.634 06:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:22.893 spare_malloc 00:16:22.893 06:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.893 06:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:22.893 06:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.893 06:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:22.893 spare_delay 00:16:22.893 06:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.893 06:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:22.893 06:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.893 06:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:22.893 
[2024-11-26 06:26:06.808237] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:22.893 [2024-11-26 06:26:06.808303] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:22.893 [2024-11-26 06:26:06.808323] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:22.893 [2024-11-26 06:26:06.808336] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:22.893 [2024-11-26 06:26:06.810512] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:22.893 [2024-11-26 06:26:06.810555] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:22.893 spare 00:16:22.893 06:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.893 06:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:16:22.893 06:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.893 06:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:22.893 [2024-11-26 06:26:06.820330] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:22.893 [2024-11-26 06:26:06.822154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:22.893 [2024-11-26 06:26:06.822330] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:22.893 [2024-11-26 06:26:06.822354] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:22.893 [2024-11-26 06:26:06.822642] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:22.893 [2024-11-26 06:26:06.822824] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:22.893 [2024-11-26 
06:26:06.822842] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:22.893 [2024-11-26 06:26:06.823013] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:22.893 06:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.893 06:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:22.893 06:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:22.893 06:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:22.893 06:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:22.893 06:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:22.893 06:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:22.893 06:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.893 06:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.893 06:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.893 06:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.893 06:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.893 06:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.893 06:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.893 06:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:22.893 06:26:06 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.893 06:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.893 "name": "raid_bdev1", 00:16:22.893 "uuid": "b5bf29ef-093e-497c-a95e-e63076727b7d", 00:16:22.893 "strip_size_kb": 0, 00:16:22.893 "state": "online", 00:16:22.893 "raid_level": "raid1", 00:16:22.893 "superblock": true, 00:16:22.893 "num_base_bdevs": 2, 00:16:22.893 "num_base_bdevs_discovered": 2, 00:16:22.893 "num_base_bdevs_operational": 2, 00:16:22.893 "base_bdevs_list": [ 00:16:22.893 { 00:16:22.893 "name": "BaseBdev1", 00:16:22.893 "uuid": "97922d53-be8d-59dc-86a0-67e706dfd44b", 00:16:22.893 "is_configured": true, 00:16:22.893 "data_offset": 2048, 00:16:22.893 "data_size": 63488 00:16:22.893 }, 00:16:22.893 { 00:16:22.893 "name": "BaseBdev2", 00:16:22.893 "uuid": "9598ae21-2597-56f5-bde1-d7d8e7c4c8bf", 00:16:22.893 "is_configured": true, 00:16:22.893 "data_offset": 2048, 00:16:22.893 "data_size": 63488 00:16:22.893 } 00:16:22.893 ] 00:16:22.893 }' 00:16:22.893 06:26:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.893 06:26:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:23.153 06:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:23.153 06:26:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.153 06:26:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:23.153 06:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:23.153 [2024-11-26 06:26:07.283830] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:23.413 06:26:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.413 06:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=63488 00:16:23.413 06:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.413 06:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:23.413 06:26:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.413 06:26:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:23.413 06:26:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.413 06:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:23.413 06:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:16:23.413 06:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:23.413 06:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:23.413 06:26:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.413 06:26:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:23.413 [2024-11-26 06:26:07.395289] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:23.413 06:26:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.413 06:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:23.413 06:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:23.413 06:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:23.413 06:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 
00:16:23.413 06:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:23.413 06:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:23.413 06:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.413 06:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.413 06:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.413 06:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.413 06:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.413 06:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.413 06:26:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.413 06:26:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:23.413 06:26:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.413 06:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.413 "name": "raid_bdev1", 00:16:23.413 "uuid": "b5bf29ef-093e-497c-a95e-e63076727b7d", 00:16:23.413 "strip_size_kb": 0, 00:16:23.413 "state": "online", 00:16:23.413 "raid_level": "raid1", 00:16:23.413 "superblock": true, 00:16:23.413 "num_base_bdevs": 2, 00:16:23.413 "num_base_bdevs_discovered": 1, 00:16:23.413 "num_base_bdevs_operational": 1, 00:16:23.413 "base_bdevs_list": [ 00:16:23.413 { 00:16:23.413 "name": null, 00:16:23.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.413 "is_configured": false, 00:16:23.413 "data_offset": 0, 00:16:23.413 "data_size": 63488 00:16:23.413 }, 00:16:23.413 { 00:16:23.413 "name": "BaseBdev2", 00:16:23.413 "uuid": 
"9598ae21-2597-56f5-bde1-d7d8e7c4c8bf", 00:16:23.413 "is_configured": true, 00:16:23.413 "data_offset": 2048, 00:16:23.413 "data_size": 63488 00:16:23.413 } 00:16:23.413 ] 00:16:23.413 }' 00:16:23.413 06:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.413 06:26:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:23.413 [2024-11-26 06:26:07.480374] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:23.413 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:23.413 Zero copy mechanism will not be used. 00:16:23.413 Running I/O for 60 seconds... 00:16:23.981 06:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:23.981 06:26:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.981 06:26:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:23.981 [2024-11-26 06:26:07.838926] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:23.981 06:26:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.981 06:26:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:23.981 [2024-11-26 06:26:07.901625] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:23.981 [2024-11-26 06:26:07.903525] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:23.981 [2024-11-26 06:26:08.018116] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:23.981 [2024-11-26 06:26:08.018769] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:24.240 [2024-11-26 06:26:08.233293] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:24.240 [2024-11-26 06:26:08.233683] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:24.499 202.00 IOPS, 606.00 MiB/s [2024-11-26T06:26:08.636Z] [2024-11-26 06:26:08.589813] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:24.758 [2024-11-26 06:26:08.820593] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:24.758 [2024-11-26 06:26:08.821024] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:24.758 06:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:24.758 06:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:24.758 06:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:24.759 06:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:24.759 06:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:25.018 06:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.018 06:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.018 06:26:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.018 06:26:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:25.018 06:26:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.018 06:26:08 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:25.018 "name": "raid_bdev1", 00:16:25.018 "uuid": "b5bf29ef-093e-497c-a95e-e63076727b7d", 00:16:25.018 "strip_size_kb": 0, 00:16:25.018 "state": "online", 00:16:25.018 "raid_level": "raid1", 00:16:25.018 "superblock": true, 00:16:25.018 "num_base_bdevs": 2, 00:16:25.018 "num_base_bdevs_discovered": 2, 00:16:25.018 "num_base_bdevs_operational": 2, 00:16:25.018 "process": { 00:16:25.018 "type": "rebuild", 00:16:25.018 "target": "spare", 00:16:25.018 "progress": { 00:16:25.018 "blocks": 10240, 00:16:25.018 "percent": 16 00:16:25.018 } 00:16:25.018 }, 00:16:25.018 "base_bdevs_list": [ 00:16:25.018 { 00:16:25.018 "name": "spare", 00:16:25.018 "uuid": "a48af4a1-8111-5f5a-a69c-050a95b160f6", 00:16:25.018 "is_configured": true, 00:16:25.018 "data_offset": 2048, 00:16:25.018 "data_size": 63488 00:16:25.018 }, 00:16:25.018 { 00:16:25.018 "name": "BaseBdev2", 00:16:25.018 "uuid": "9598ae21-2597-56f5-bde1-d7d8e7c4c8bf", 00:16:25.018 "is_configured": true, 00:16:25.018 "data_offset": 2048, 00:16:25.018 "data_size": 63488 00:16:25.018 } 00:16:25.018 ] 00:16:25.018 }' 00:16:25.018 06:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:25.018 06:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:25.018 06:26:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:25.018 06:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:25.018 06:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:25.018 06:26:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.018 06:26:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:25.018 [2024-11-26 
06:26:09.044338] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:25.277 [2024-11-26 06:26:09.261566] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:25.277 [2024-11-26 06:26:09.276077] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:25.277 [2024-11-26 06:26:09.276143] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:25.277 [2024-11-26 06:26:09.276157] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:25.277 [2024-11-26 06:26:09.316158] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:16:25.277 06:26:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.277 06:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:25.277 06:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:25.277 06:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:25.277 06:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:25.277 06:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:25.277 06:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:25.277 06:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.277 06:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.277 06:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.277 06:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.277 
06:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.277 06:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.277 06:26:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.277 06:26:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:25.277 06:26:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.277 06:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.277 "name": "raid_bdev1", 00:16:25.277 "uuid": "b5bf29ef-093e-497c-a95e-e63076727b7d", 00:16:25.277 "strip_size_kb": 0, 00:16:25.277 "state": "online", 00:16:25.277 "raid_level": "raid1", 00:16:25.277 "superblock": true, 00:16:25.277 "num_base_bdevs": 2, 00:16:25.277 "num_base_bdevs_discovered": 1, 00:16:25.277 "num_base_bdevs_operational": 1, 00:16:25.277 "base_bdevs_list": [ 00:16:25.277 { 00:16:25.277 "name": null, 00:16:25.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.277 "is_configured": false, 00:16:25.277 "data_offset": 0, 00:16:25.277 "data_size": 63488 00:16:25.277 }, 00:16:25.277 { 00:16:25.277 "name": "BaseBdev2", 00:16:25.277 "uuid": "9598ae21-2597-56f5-bde1-d7d8e7c4c8bf", 00:16:25.277 "is_configured": true, 00:16:25.277 "data_offset": 2048, 00:16:25.277 "data_size": 63488 00:16:25.277 } 00:16:25.277 ] 00:16:25.277 }' 00:16:25.277 06:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.277 06:26:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:25.796 164.50 IOPS, 493.50 MiB/s [2024-11-26T06:26:09.933Z] 06:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:25.796 06:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:16:25.796 06:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:25.796 06:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:25.796 06:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:25.796 06:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.796 06:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.796 06:26:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.796 06:26:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:25.796 06:26:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.796 06:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:25.796 "name": "raid_bdev1", 00:16:25.796 "uuid": "b5bf29ef-093e-497c-a95e-e63076727b7d", 00:16:25.796 "strip_size_kb": 0, 00:16:25.796 "state": "online", 00:16:25.796 "raid_level": "raid1", 00:16:25.796 "superblock": true, 00:16:25.796 "num_base_bdevs": 2, 00:16:25.796 "num_base_bdevs_discovered": 1, 00:16:25.796 "num_base_bdevs_operational": 1, 00:16:25.796 "base_bdevs_list": [ 00:16:25.796 { 00:16:25.796 "name": null, 00:16:25.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.796 "is_configured": false, 00:16:25.796 "data_offset": 0, 00:16:25.796 "data_size": 63488 00:16:25.796 }, 00:16:25.796 { 00:16:25.796 "name": "BaseBdev2", 00:16:25.796 "uuid": "9598ae21-2597-56f5-bde1-d7d8e7c4c8bf", 00:16:25.796 "is_configured": true, 00:16:25.796 "data_offset": 2048, 00:16:25.796 "data_size": 63488 00:16:25.796 } 00:16:25.796 ] 00:16:25.796 }' 00:16:25.796 06:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 
00:16:25.796 06:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:25.796 06:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:25.796 06:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:25.796 06:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:25.796 06:26:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.796 06:26:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:25.796 [2024-11-26 06:26:09.917033] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:26.056 06:26:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.056 06:26:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:26.056 [2024-11-26 06:26:09.986230] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:26.056 [2024-11-26 06:26:09.988453] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:26.056 [2024-11-26 06:26:10.104600] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:26.056 [2024-11-26 06:26:10.105266] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:26.316 [2024-11-26 06:26:10.343736] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:26.316 [2024-11-26 06:26:10.344160] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:26.574 169.00 IOPS, 507.00 MiB/s [2024-11-26T06:26:10.711Z] [2024-11-26 06:26:10.705257] 
bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:26.833 [2024-11-26 06:26:10.922422] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:27.092 06:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:27.092 06:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:27.092 06:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:27.092 06:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:27.092 06:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:27.092 06:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.093 06:26:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.093 06:26:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.093 06:26:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:27.093 06:26:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.093 06:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:27.093 "name": "raid_bdev1", 00:16:27.093 "uuid": "b5bf29ef-093e-497c-a95e-e63076727b7d", 00:16:27.093 "strip_size_kb": 0, 00:16:27.093 "state": "online", 00:16:27.093 "raid_level": "raid1", 00:16:27.093 "superblock": true, 00:16:27.093 "num_base_bdevs": 2, 00:16:27.093 "num_base_bdevs_discovered": 2, 00:16:27.093 "num_base_bdevs_operational": 2, 00:16:27.093 "process": { 00:16:27.093 "type": "rebuild", 00:16:27.093 "target": "spare", 00:16:27.093 "progress": 
{ 00:16:27.093 "blocks": 10240, 00:16:27.093 "percent": 16 00:16:27.093 } 00:16:27.093 }, 00:16:27.093 "base_bdevs_list": [ 00:16:27.093 { 00:16:27.093 "name": "spare", 00:16:27.093 "uuid": "a48af4a1-8111-5f5a-a69c-050a95b160f6", 00:16:27.093 "is_configured": true, 00:16:27.093 "data_offset": 2048, 00:16:27.093 "data_size": 63488 00:16:27.093 }, 00:16:27.093 { 00:16:27.093 "name": "BaseBdev2", 00:16:27.093 "uuid": "9598ae21-2597-56f5-bde1-d7d8e7c4c8bf", 00:16:27.093 "is_configured": true, 00:16:27.093 "data_offset": 2048, 00:16:27.093 "data_size": 63488 00:16:27.093 } 00:16:27.093 ] 00:16:27.093 }' 00:16:27.093 06:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:27.093 06:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:27.093 06:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:27.093 06:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:27.093 06:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:27.093 06:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:27.093 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:27.093 06:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:16:27.093 06:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:27.093 06:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:16:27.093 06:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=444 00:16:27.093 06:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:27.093 06:26:11 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:27.093 06:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:27.093 06:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:27.093 06:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:27.093 06:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:27.093 06:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.093 06:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.093 06:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.093 06:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:27.093 [2024-11-26 06:26:11.168449] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:16:27.093 06:26:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.093 06:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:27.093 "name": "raid_bdev1", 00:16:27.093 "uuid": "b5bf29ef-093e-497c-a95e-e63076727b7d", 00:16:27.093 "strip_size_kb": 0, 00:16:27.093 "state": "online", 00:16:27.093 "raid_level": "raid1", 00:16:27.093 "superblock": true, 00:16:27.093 "num_base_bdevs": 2, 00:16:27.093 "num_base_bdevs_discovered": 2, 00:16:27.093 "num_base_bdevs_operational": 2, 00:16:27.093 "process": { 00:16:27.093 "type": "rebuild", 00:16:27.093 "target": "spare", 00:16:27.093 "progress": { 00:16:27.093 "blocks": 12288, 00:16:27.093 "percent": 19 00:16:27.093 } 00:16:27.093 }, 00:16:27.093 "base_bdevs_list": [ 00:16:27.093 { 00:16:27.093 "name": "spare", 
00:16:27.093 "uuid": "a48af4a1-8111-5f5a-a69c-050a95b160f6", 00:16:27.093 "is_configured": true, 00:16:27.093 "data_offset": 2048, 00:16:27.093 "data_size": 63488 00:16:27.093 }, 00:16:27.093 { 00:16:27.093 "name": "BaseBdev2", 00:16:27.093 "uuid": "9598ae21-2597-56f5-bde1-d7d8e7c4c8bf", 00:16:27.093 "is_configured": true, 00:16:27.093 "data_offset": 2048, 00:16:27.093 "data_size": 63488 00:16:27.093 } 00:16:27.093 ] 00:16:27.093 }' 00:16:27.093 06:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:27.093 06:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:27.093 06:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:27.353 06:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:27.353 06:26:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:27.353 [2024-11-26 06:26:11.276579] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:16:27.353 [2024-11-26 06:26:11.276990] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:16:27.612 147.75 IOPS, 443.25 MiB/s [2024-11-26T06:26:11.749Z] [2024-11-26 06:26:11.633886] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:16:27.871 [2024-11-26 06:26:11.858416] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:16:28.130 [2024-11-26 06:26:12.076884] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:16:28.130 [2024-11-26 06:26:12.077571] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 
offset_begin: 24576 offset_end: 30720 00:16:28.391 06:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:28.391 06:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:28.391 06:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:28.391 06:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:28.391 06:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:28.391 06:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:28.391 06:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.391 06:26:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.391 06:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.391 06:26:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:28.391 06:26:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.391 06:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:28.391 "name": "raid_bdev1", 00:16:28.391 "uuid": "b5bf29ef-093e-497c-a95e-e63076727b7d", 00:16:28.391 "strip_size_kb": 0, 00:16:28.391 "state": "online", 00:16:28.391 "raid_level": "raid1", 00:16:28.392 "superblock": true, 00:16:28.392 "num_base_bdevs": 2, 00:16:28.392 "num_base_bdevs_discovered": 2, 00:16:28.392 "num_base_bdevs_operational": 2, 00:16:28.392 "process": { 00:16:28.392 "type": "rebuild", 00:16:28.392 "target": "spare", 00:16:28.392 "progress": { 00:16:28.392 "blocks": 28672, 00:16:28.392 "percent": 45 00:16:28.392 } 00:16:28.392 }, 00:16:28.392 "base_bdevs_list": [ 
00:16:28.392 { 00:16:28.392 "name": "spare", 00:16:28.392 "uuid": "a48af4a1-8111-5f5a-a69c-050a95b160f6", 00:16:28.392 "is_configured": true, 00:16:28.392 "data_offset": 2048, 00:16:28.392 "data_size": 63488 00:16:28.392 }, 00:16:28.392 { 00:16:28.392 "name": "BaseBdev2", 00:16:28.392 "uuid": "9598ae21-2597-56f5-bde1-d7d8e7c4c8bf", 00:16:28.392 "is_configured": true, 00:16:28.392 "data_offset": 2048, 00:16:28.392 "data_size": 63488 00:16:28.392 } 00:16:28.392 ] 00:16:28.392 }' 00:16:28.392 06:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:28.392 06:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:28.392 06:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:28.392 06:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:28.392 06:26:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:28.392 [2024-11-26 06:26:12.421385] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:16:28.652 135.20 IOPS, 405.60 MiB/s [2024-11-26T06:26:12.789Z] [2024-11-26 06:26:12.631268] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:16:28.911 [2024-11-26 06:26:12.856930] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:16:29.171 [2024-11-26 06:26:13.060732] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:16:29.431 06:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:29.431 06:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 
00:16:29.431 06:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:29.431 06:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:29.431 06:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:29.431 06:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:29.431 06:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.431 06:26:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.431 06:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.431 06:26:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:29.431 06:26:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.431 06:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:29.431 "name": "raid_bdev1", 00:16:29.431 "uuid": "b5bf29ef-093e-497c-a95e-e63076727b7d", 00:16:29.431 "strip_size_kb": 0, 00:16:29.431 "state": "online", 00:16:29.431 "raid_level": "raid1", 00:16:29.431 "superblock": true, 00:16:29.431 "num_base_bdevs": 2, 00:16:29.431 "num_base_bdevs_discovered": 2, 00:16:29.431 "num_base_bdevs_operational": 2, 00:16:29.431 "process": { 00:16:29.431 "type": "rebuild", 00:16:29.431 "target": "spare", 00:16:29.431 "progress": { 00:16:29.431 "blocks": 47104, 00:16:29.431 "percent": 74 00:16:29.431 } 00:16:29.431 }, 00:16:29.431 "base_bdevs_list": [ 00:16:29.431 { 00:16:29.431 "name": "spare", 00:16:29.431 "uuid": "a48af4a1-8111-5f5a-a69c-050a95b160f6", 00:16:29.431 "is_configured": true, 00:16:29.431 "data_offset": 2048, 00:16:29.431 "data_size": 63488 00:16:29.431 }, 00:16:29.431 { 00:16:29.431 "name": "BaseBdev2", 00:16:29.431 "uuid": 
"9598ae21-2597-56f5-bde1-d7d8e7c4c8bf", 00:16:29.431 "is_configured": true, 00:16:29.431 "data_offset": 2048, 00:16:29.431 "data_size": 63488 00:16:29.431 } 00:16:29.431 ] 00:16:29.431 }' 00:16:29.431 06:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:29.431 119.83 IOPS, 359.50 MiB/s [2024-11-26T06:26:13.568Z] 06:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:29.431 06:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:29.691 06:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:29.691 06:26:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:29.691 [2024-11-26 06:26:13.626761] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:16:30.259 [2024-11-26 06:26:14.280596] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:30.259 [2024-11-26 06:26:14.386677] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:30.259 [2024-11-26 06:26:14.390089] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:30.522 107.14 IOPS, 321.43 MiB/s [2024-11-26T06:26:14.659Z] 06:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:30.522 06:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:30.522 06:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:30.522 06:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:30.522 06:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:30.522 
06:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:30.522 06:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.522 06:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.522 06:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.522 06:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:30.522 06:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.522 06:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:30.522 "name": "raid_bdev1", 00:16:30.522 "uuid": "b5bf29ef-093e-497c-a95e-e63076727b7d", 00:16:30.522 "strip_size_kb": 0, 00:16:30.522 "state": "online", 00:16:30.522 "raid_level": "raid1", 00:16:30.522 "superblock": true, 00:16:30.522 "num_base_bdevs": 2, 00:16:30.522 "num_base_bdevs_discovered": 2, 00:16:30.522 "num_base_bdevs_operational": 2, 00:16:30.522 "base_bdevs_list": [ 00:16:30.522 { 00:16:30.522 "name": "spare", 00:16:30.522 "uuid": "a48af4a1-8111-5f5a-a69c-050a95b160f6", 00:16:30.522 "is_configured": true, 00:16:30.522 "data_offset": 2048, 00:16:30.522 "data_size": 63488 00:16:30.522 }, 00:16:30.522 { 00:16:30.522 "name": "BaseBdev2", 00:16:30.522 "uuid": "9598ae21-2597-56f5-bde1-d7d8e7c4c8bf", 00:16:30.522 "is_configured": true, 00:16:30.522 "data_offset": 2048, 00:16:30.522 "data_size": 63488 00:16:30.522 } 00:16:30.522 ] 00:16:30.522 }' 00:16:30.522 06:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:30.781 06:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:30.781 06:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:30.781 
06:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:30.781 06:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:16:30.781 06:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:30.781 06:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:30.781 06:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:30.781 06:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:30.781 06:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:30.781 06:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.781 06:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.781 06:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.781 06:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:30.781 06:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.781 06:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:30.781 "name": "raid_bdev1", 00:16:30.781 "uuid": "b5bf29ef-093e-497c-a95e-e63076727b7d", 00:16:30.781 "strip_size_kb": 0, 00:16:30.781 "state": "online", 00:16:30.781 "raid_level": "raid1", 00:16:30.781 "superblock": true, 00:16:30.781 "num_base_bdevs": 2, 00:16:30.781 "num_base_bdevs_discovered": 2, 00:16:30.781 "num_base_bdevs_operational": 2, 00:16:30.781 "base_bdevs_list": [ 00:16:30.781 { 00:16:30.781 "name": "spare", 00:16:30.781 "uuid": "a48af4a1-8111-5f5a-a69c-050a95b160f6", 00:16:30.781 "is_configured": true, 00:16:30.781 "data_offset": 2048, 
00:16:30.781 "data_size": 63488 00:16:30.781 }, 00:16:30.781 { 00:16:30.781 "name": "BaseBdev2", 00:16:30.781 "uuid": "9598ae21-2597-56f5-bde1-d7d8e7c4c8bf", 00:16:30.781 "is_configured": true, 00:16:30.781 "data_offset": 2048, 00:16:30.781 "data_size": 63488 00:16:30.781 } 00:16:30.781 ] 00:16:30.781 }' 00:16:30.781 06:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:30.781 06:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:30.781 06:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:30.781 06:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:30.781 06:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:30.781 06:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:30.781 06:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:30.781 06:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:30.781 06:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:30.781 06:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:30.781 06:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.781 06:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.781 06:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.781 06:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.781 06:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:16:30.781 06:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.781 06:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.781 06:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:30.781 06:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.041 06:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.041 "name": "raid_bdev1", 00:16:31.041 "uuid": "b5bf29ef-093e-497c-a95e-e63076727b7d", 00:16:31.041 "strip_size_kb": 0, 00:16:31.041 "state": "online", 00:16:31.041 "raid_level": "raid1", 00:16:31.041 "superblock": true, 00:16:31.041 "num_base_bdevs": 2, 00:16:31.041 "num_base_bdevs_discovered": 2, 00:16:31.041 "num_base_bdevs_operational": 2, 00:16:31.041 "base_bdevs_list": [ 00:16:31.041 { 00:16:31.041 "name": "spare", 00:16:31.041 "uuid": "a48af4a1-8111-5f5a-a69c-050a95b160f6", 00:16:31.041 "is_configured": true, 00:16:31.041 "data_offset": 2048, 00:16:31.041 "data_size": 63488 00:16:31.041 }, 00:16:31.041 { 00:16:31.041 "name": "BaseBdev2", 00:16:31.041 "uuid": "9598ae21-2597-56f5-bde1-d7d8e7c4c8bf", 00:16:31.041 "is_configured": true, 00:16:31.041 "data_offset": 2048, 00:16:31.041 "data_size": 63488 00:16:31.041 } 00:16:31.041 ] 00:16:31.041 }' 00:16:31.041 06:26:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.041 06:26:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:31.301 06:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:31.301 06:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.301 06:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:31.301 
[2024-11-26 06:26:15.383000] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:31.301 [2024-11-26 06:26:15.383156] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:31.561 00:16:31.561 Latency(us) 00:16:31.561 [2024-11-26T06:26:15.698Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:31.561 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:16:31.561 raid_bdev1 : 8.00 99.96 299.87 0.00 0.00 12817.42 304.07 116304.94 00:16:31.561 [2024-11-26T06:26:15.698Z] =================================================================================================================== 00:16:31.561 [2024-11-26T06:26:15.698Z] Total : 99.96 299.87 0.00 0.00 12817.42 304.07 116304.94 00:16:31.561 [2024-11-26 06:26:15.496286] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:31.561 [2024-11-26 06:26:15.496462] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:31.561 [2024-11-26 06:26:15.496601] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:31.561 [2024-11-26 06:26:15.496670] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:31.561 { 00:16:31.561 "results": [ 00:16:31.561 { 00:16:31.561 "job": "raid_bdev1", 00:16:31.561 "core_mask": "0x1", 00:16:31.561 "workload": "randrw", 00:16:31.561 "percentage": 50, 00:16:31.561 "status": "finished", 00:16:31.561 "queue_depth": 2, 00:16:31.561 "io_size": 3145728, 00:16:31.561 "runtime": 8.003592, 00:16:31.561 "iops": 99.95512015105217, 00:16:31.561 "mibps": 299.86536045315654, 00:16:31.561 "io_failed": 0, 00:16:31.561 "io_timeout": 0, 00:16:31.561 "avg_latency_us": 12817.419179039302, 00:16:31.561 "min_latency_us": 304.0698689956332, 00:16:31.561 "max_latency_us": 116304.93624454149 00:16:31.561 } 
00:16:31.561 ], 00:16:31.561 "core_count": 1 00:16:31.561 } 00:16:31.561 06:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.561 06:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.561 06:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:16:31.561 06:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.561 06:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:31.561 06:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.561 06:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:31.561 06:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:31.561 06:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:16:31.561 06:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:16:31.561 06:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:31.561 06:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:16:31.561 06:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:31.561 06:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:31.561 06:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:31.561 06:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:16:31.561 06:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:31.561 06:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:31.561 06:26:15 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:16:31.821 /dev/nbd0 00:16:31.821 06:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:31.821 06:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:31.821 06:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:31.821 06:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:16:31.821 06:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:31.821 06:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:31.821 06:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:31.821 06:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:16:31.821 06:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:31.821 06:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:31.821 06:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:31.821 1+0 records in 00:16:31.821 1+0 records out 00:16:31.821 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000554061 s, 7.4 MB/s 00:16:31.821 06:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:31.821 06:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:16:31.821 06:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:31.821 
06:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:31.821 06:26:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:16:31.821 06:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:31.821 06:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:31.821 06:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:31.821 06:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:16:31.821 06:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:16:31.821 06:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:31.821 06:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:16:31.821 06:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:31.821 06:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:16:31.821 06:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:31.822 06:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:16:31.822 06:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:31.822 06:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:31.822 06:26:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:16:32.082 /dev/nbd1 00:16:32.082 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:32.082 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd1 00:16:32.082 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:32.082 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:16:32.082 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:32.082 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:32.082 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:32.082 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:16:32.082 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:32.082 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:32.082 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:32.082 1+0 records in 00:16:32.082 1+0 records out 00:16:32.082 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000426907 s, 9.6 MB/s 00:16:32.082 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:32.082 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:16:32.082 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:32.082 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:32.082 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:16:32.082 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:32.082 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:32.082 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:32.342 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:16:32.342 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:32.342 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:16:32.342 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:32.342 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:16:32.342 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:32.342 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:32.601 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:32.601 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:32.601 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:32.601 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:32.601 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:32.601 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:32.601 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:16:32.601 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:32.601 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:32.601 06:26:16 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:32.601 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:32.601 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:32.601 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:16:32.601 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:32.601 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:32.601 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:32.601 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:32.601 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:32.601 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:32.601 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:32.601 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:32.601 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:16:32.601 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:32.601 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:32.601 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:32.601 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.601 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:32.861 06:26:16 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.861 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:32.861 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.861 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:32.861 [2024-11-26 06:26:16.741584] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:32.861 [2024-11-26 06:26:16.741683] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:32.861 [2024-11-26 06:26:16.741705] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:16:32.861 [2024-11-26 06:26:16.741716] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:32.861 [2024-11-26 06:26:16.743999] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:32.861 spare 00:16:32.861 [2024-11-26 06:26:16.744166] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:32.861 [2024-11-26 06:26:16.744283] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:32.861 [2024-11-26 06:26:16.744359] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:32.861 [2024-11-26 06:26:16.744534] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:32.861 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.861 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:32.861 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.861 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:16:32.861 [2024-11-26 06:26:16.844480] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:32.861 [2024-11-26 06:26:16.844628] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:32.862 [2024-11-26 06:26:16.845041] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:16:32.862 [2024-11-26 06:26:16.845283] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:32.862 [2024-11-26 06:26:16.845299] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:32.862 [2024-11-26 06:26:16.845552] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:32.862 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.862 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:32.862 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:32.862 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:32.862 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:32.862 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:32.862 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:32.862 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.862 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.862 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.862 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 
-- # local tmp 00:16:32.862 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.862 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.862 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:32.862 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.862 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.862 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.862 "name": "raid_bdev1", 00:16:32.862 "uuid": "b5bf29ef-093e-497c-a95e-e63076727b7d", 00:16:32.862 "strip_size_kb": 0, 00:16:32.862 "state": "online", 00:16:32.862 "raid_level": "raid1", 00:16:32.862 "superblock": true, 00:16:32.862 "num_base_bdevs": 2, 00:16:32.862 "num_base_bdevs_discovered": 2, 00:16:32.862 "num_base_bdevs_operational": 2, 00:16:32.862 "base_bdevs_list": [ 00:16:32.862 { 00:16:32.862 "name": "spare", 00:16:32.862 "uuid": "a48af4a1-8111-5f5a-a69c-050a95b160f6", 00:16:32.862 "is_configured": true, 00:16:32.862 "data_offset": 2048, 00:16:32.862 "data_size": 63488 00:16:32.862 }, 00:16:32.862 { 00:16:32.862 "name": "BaseBdev2", 00:16:32.862 "uuid": "9598ae21-2597-56f5-bde1-d7d8e7c4c8bf", 00:16:32.862 "is_configured": true, 00:16:32.862 "data_offset": 2048, 00:16:32.862 "data_size": 63488 00:16:32.862 } 00:16:32.862 ] 00:16:32.862 }' 00:16:32.862 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.862 06:26:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:33.441 06:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:33.441 06:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:16:33.441 06:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:33.441 06:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:33.441 06:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:33.441 06:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.441 06:26:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.441 06:26:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:33.441 06:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.441 06:26:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.441 06:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:33.441 "name": "raid_bdev1", 00:16:33.441 "uuid": "b5bf29ef-093e-497c-a95e-e63076727b7d", 00:16:33.441 "strip_size_kb": 0, 00:16:33.441 "state": "online", 00:16:33.441 "raid_level": "raid1", 00:16:33.441 "superblock": true, 00:16:33.441 "num_base_bdevs": 2, 00:16:33.441 "num_base_bdevs_discovered": 2, 00:16:33.441 "num_base_bdevs_operational": 2, 00:16:33.441 "base_bdevs_list": [ 00:16:33.441 { 00:16:33.441 "name": "spare", 00:16:33.441 "uuid": "a48af4a1-8111-5f5a-a69c-050a95b160f6", 00:16:33.441 "is_configured": true, 00:16:33.441 "data_offset": 2048, 00:16:33.441 "data_size": 63488 00:16:33.441 }, 00:16:33.441 { 00:16:33.441 "name": "BaseBdev2", 00:16:33.441 "uuid": "9598ae21-2597-56f5-bde1-d7d8e7c4c8bf", 00:16:33.441 "is_configured": true, 00:16:33.441 "data_offset": 2048, 00:16:33.441 "data_size": 63488 00:16:33.441 } 00:16:33.441 ] 00:16:33.441 }' 00:16:33.441 06:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:16:33.441 06:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:33.441 06:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:33.441 06:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:33.441 06:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.441 06:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:33.441 06:26:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.441 06:26:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:33.441 06:26:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.441 06:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:33.441 06:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:33.441 06:26:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.441 06:26:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:33.441 [2024-11-26 06:26:17.516623] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:33.441 06:26:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.441 06:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:33.441 06:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:33.442 06:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:33.442 06:26:17 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:33.442 06:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:33.442 06:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:33.442 06:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:33.442 06:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:33.442 06:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:33.442 06:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:33.442 06:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.442 06:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.442 06:26:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.442 06:26:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:33.442 06:26:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.701 06:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.701 "name": "raid_bdev1", 00:16:33.701 "uuid": "b5bf29ef-093e-497c-a95e-e63076727b7d", 00:16:33.701 "strip_size_kb": 0, 00:16:33.701 "state": "online", 00:16:33.701 "raid_level": "raid1", 00:16:33.701 "superblock": true, 00:16:33.701 "num_base_bdevs": 2, 00:16:33.701 "num_base_bdevs_discovered": 1, 00:16:33.701 "num_base_bdevs_operational": 1, 00:16:33.701 "base_bdevs_list": [ 00:16:33.701 { 00:16:33.701 "name": null, 00:16:33.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.701 "is_configured": false, 00:16:33.701 "data_offset": 0, 00:16:33.701 "data_size": 63488 00:16:33.701 }, 00:16:33.701 { 
00:16:33.701 "name": "BaseBdev2", 00:16:33.701 "uuid": "9598ae21-2597-56f5-bde1-d7d8e7c4c8bf", 00:16:33.701 "is_configured": true, 00:16:33.701 "data_offset": 2048, 00:16:33.701 "data_size": 63488 00:16:33.701 } 00:16:33.701 ] 00:16:33.701 }' 00:16:33.701 06:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.701 06:26:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:33.961 06:26:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:33.961 06:26:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.961 06:26:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:33.961 [2024-11-26 06:26:18.004158] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:33.961 [2024-11-26 06:26:18.004556] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:33.961 [2024-11-26 06:26:18.004632] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:33.961 [2024-11-26 06:26:18.004733] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:33.961 [2024-11-26 06:26:18.022666] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:16:33.961 06:26:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.961 06:26:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:33.961 [2024-11-26 06:26:18.024757] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:34.902 06:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:34.902 06:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:34.902 06:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:34.902 06:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:34.902 06:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:35.161 06:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.161 06:26:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.161 06:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.162 06:26:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:35.162 06:26:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.162 06:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:35.162 "name": "raid_bdev1", 00:16:35.162 "uuid": "b5bf29ef-093e-497c-a95e-e63076727b7d", 00:16:35.162 "strip_size_kb": 0, 00:16:35.162 "state": "online", 
00:16:35.162 "raid_level": "raid1", 00:16:35.162 "superblock": true, 00:16:35.162 "num_base_bdevs": 2, 00:16:35.162 "num_base_bdevs_discovered": 2, 00:16:35.162 "num_base_bdevs_operational": 2, 00:16:35.162 "process": { 00:16:35.162 "type": "rebuild", 00:16:35.162 "target": "spare", 00:16:35.162 "progress": { 00:16:35.162 "blocks": 20480, 00:16:35.162 "percent": 32 00:16:35.162 } 00:16:35.162 }, 00:16:35.162 "base_bdevs_list": [ 00:16:35.162 { 00:16:35.162 "name": "spare", 00:16:35.162 "uuid": "a48af4a1-8111-5f5a-a69c-050a95b160f6", 00:16:35.162 "is_configured": true, 00:16:35.162 "data_offset": 2048, 00:16:35.162 "data_size": 63488 00:16:35.162 }, 00:16:35.162 { 00:16:35.162 "name": "BaseBdev2", 00:16:35.162 "uuid": "9598ae21-2597-56f5-bde1-d7d8e7c4c8bf", 00:16:35.162 "is_configured": true, 00:16:35.162 "data_offset": 2048, 00:16:35.162 "data_size": 63488 00:16:35.162 } 00:16:35.162 ] 00:16:35.162 }' 00:16:35.162 06:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:35.162 06:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:35.162 06:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:35.162 06:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:35.162 06:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:35.162 06:26:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.162 06:26:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:35.162 [2024-11-26 06:26:19.172750] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:35.162 [2024-11-26 06:26:19.231115] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:35.162 [2024-11-26 
06:26:19.231204] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:35.162 [2024-11-26 06:26:19.231223] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:35.162 [2024-11-26 06:26:19.231231] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:35.162 06:26:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.162 06:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:35.162 06:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:35.162 06:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:35.162 06:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:35.162 06:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:35.162 06:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:35.162 06:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.162 06:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.162 06:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:35.162 06:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.162 06:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.162 06:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.162 06:26:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.162 06:26:19 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:16:35.420 06:26:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.420 06:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:35.420 "name": "raid_bdev1", 00:16:35.420 "uuid": "b5bf29ef-093e-497c-a95e-e63076727b7d", 00:16:35.420 "strip_size_kb": 0, 00:16:35.420 "state": "online", 00:16:35.420 "raid_level": "raid1", 00:16:35.420 "superblock": true, 00:16:35.420 "num_base_bdevs": 2, 00:16:35.420 "num_base_bdevs_discovered": 1, 00:16:35.420 "num_base_bdevs_operational": 1, 00:16:35.420 "base_bdevs_list": [ 00:16:35.420 { 00:16:35.420 "name": null, 00:16:35.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.420 "is_configured": false, 00:16:35.420 "data_offset": 0, 00:16:35.420 "data_size": 63488 00:16:35.420 }, 00:16:35.420 { 00:16:35.420 "name": "BaseBdev2", 00:16:35.420 "uuid": "9598ae21-2597-56f5-bde1-d7d8e7c4c8bf", 00:16:35.420 "is_configured": true, 00:16:35.420 "data_offset": 2048, 00:16:35.420 "data_size": 63488 00:16:35.420 } 00:16:35.420 ] 00:16:35.420 }' 00:16:35.420 06:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.420 06:26:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:35.679 06:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:35.679 06:26:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.679 06:26:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:35.679 [2024-11-26 06:26:19.745777] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:35.679 [2024-11-26 06:26:19.746005] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:35.679 [2024-11-26 06:26:19.746082] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x61600000ae80 00:16:35.679 [2024-11-26 06:26:19.746117] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:35.679 [2024-11-26 06:26:19.746709] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:35.679 [2024-11-26 06:26:19.746769] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:35.679 [2024-11-26 06:26:19.746928] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:35.679 [2024-11-26 06:26:19.746970] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:35.679 [2024-11-26 06:26:19.747037] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:35.679 [2024-11-26 06:26:19.747112] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:35.679 [2024-11-26 06:26:19.764069] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:16:35.679 spare 00:16:35.679 06:26:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.679 06:26:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:35.679 [2024-11-26 06:26:19.766191] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:37.056 06:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:37.056 06:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:37.056 06:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:37.056 06:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:37.056 06:26:20 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:37.056 06:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.056 06:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.056 06:26:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.056 06:26:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:37.056 06:26:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.056 06:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:37.056 "name": "raid_bdev1", 00:16:37.056 "uuid": "b5bf29ef-093e-497c-a95e-e63076727b7d", 00:16:37.056 "strip_size_kb": 0, 00:16:37.056 "state": "online", 00:16:37.056 "raid_level": "raid1", 00:16:37.056 "superblock": true, 00:16:37.056 "num_base_bdevs": 2, 00:16:37.056 "num_base_bdevs_discovered": 2, 00:16:37.056 "num_base_bdevs_operational": 2, 00:16:37.056 "process": { 00:16:37.056 "type": "rebuild", 00:16:37.056 "target": "spare", 00:16:37.056 "progress": { 00:16:37.056 "blocks": 20480, 00:16:37.056 "percent": 32 00:16:37.056 } 00:16:37.056 }, 00:16:37.056 "base_bdevs_list": [ 00:16:37.056 { 00:16:37.056 "name": "spare", 00:16:37.056 "uuid": "a48af4a1-8111-5f5a-a69c-050a95b160f6", 00:16:37.056 "is_configured": true, 00:16:37.056 "data_offset": 2048, 00:16:37.056 "data_size": 63488 00:16:37.056 }, 00:16:37.056 { 00:16:37.056 "name": "BaseBdev2", 00:16:37.056 "uuid": "9598ae21-2597-56f5-bde1-d7d8e7c4c8bf", 00:16:37.056 "is_configured": true, 00:16:37.056 "data_offset": 2048, 00:16:37.056 "data_size": 63488 00:16:37.056 } 00:16:37.056 ] 00:16:37.056 }' 00:16:37.056 06:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:37.056 06:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:16:37.056 06:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:37.056 06:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:37.056 06:26:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:37.056 06:26:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.056 06:26:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:37.056 [2024-11-26 06:26:20.930091] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:37.056 [2024-11-26 06:26:20.972497] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:37.056 [2024-11-26 06:26:20.972682] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:37.056 [2024-11-26 06:26:20.972700] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:37.056 [2024-11-26 06:26:20.972714] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:37.056 06:26:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.056 06:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:37.056 06:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:37.056 06:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:37.056 06:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:37.056 06:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:37.056 06:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:16:37.056 06:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:37.056 06:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.056 06:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:37.056 06:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.056 06:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.056 06:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.056 06:26:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.056 06:26:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:37.056 06:26:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.056 06:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.056 "name": "raid_bdev1", 00:16:37.056 "uuid": "b5bf29ef-093e-497c-a95e-e63076727b7d", 00:16:37.056 "strip_size_kb": 0, 00:16:37.056 "state": "online", 00:16:37.056 "raid_level": "raid1", 00:16:37.056 "superblock": true, 00:16:37.056 "num_base_bdevs": 2, 00:16:37.056 "num_base_bdevs_discovered": 1, 00:16:37.056 "num_base_bdevs_operational": 1, 00:16:37.056 "base_bdevs_list": [ 00:16:37.056 { 00:16:37.056 "name": null, 00:16:37.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.057 "is_configured": false, 00:16:37.057 "data_offset": 0, 00:16:37.057 "data_size": 63488 00:16:37.057 }, 00:16:37.057 { 00:16:37.057 "name": "BaseBdev2", 00:16:37.057 "uuid": "9598ae21-2597-56f5-bde1-d7d8e7c4c8bf", 00:16:37.057 "is_configured": true, 00:16:37.057 "data_offset": 2048, 00:16:37.057 "data_size": 63488 00:16:37.057 } 00:16:37.057 ] 00:16:37.057 }' 
00:16:37.057 06:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.057 06:26:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:37.316 06:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:37.316 06:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:37.316 06:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:37.316 06:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:37.316 06:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:37.316 06:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.316 06:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.316 06:26:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.575 06:26:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:37.575 06:26:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.575 06:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:37.575 "name": "raid_bdev1", 00:16:37.575 "uuid": "b5bf29ef-093e-497c-a95e-e63076727b7d", 00:16:37.575 "strip_size_kb": 0, 00:16:37.575 "state": "online", 00:16:37.575 "raid_level": "raid1", 00:16:37.575 "superblock": true, 00:16:37.575 "num_base_bdevs": 2, 00:16:37.575 "num_base_bdevs_discovered": 1, 00:16:37.575 "num_base_bdevs_operational": 1, 00:16:37.575 "base_bdevs_list": [ 00:16:37.575 { 00:16:37.575 "name": null, 00:16:37.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.575 "is_configured": false, 00:16:37.575 "data_offset": 0, 
00:16:37.575 "data_size": 63488 00:16:37.575 }, 00:16:37.575 { 00:16:37.575 "name": "BaseBdev2", 00:16:37.575 "uuid": "9598ae21-2597-56f5-bde1-d7d8e7c4c8bf", 00:16:37.575 "is_configured": true, 00:16:37.575 "data_offset": 2048, 00:16:37.575 "data_size": 63488 00:16:37.575 } 00:16:37.575 ] 00:16:37.576 }' 00:16:37.576 06:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:37.576 06:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:37.576 06:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:37.576 06:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:37.576 06:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:37.576 06:26:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.576 06:26:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:37.576 06:26:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.576 06:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:37.576 06:26:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.576 06:26:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:37.576 [2024-11-26 06:26:21.608126] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:37.576 [2024-11-26 06:26:21.608206] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:37.576 [2024-11-26 06:26:21.608246] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:16:37.576 [2024-11-26 06:26:21.608259] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:37.576 [2024-11-26 06:26:21.608770] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:37.576 [2024-11-26 06:26:21.608793] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:37.576 [2024-11-26 06:26:21.608879] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:37.576 [2024-11-26 06:26:21.608901] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:37.576 [2024-11-26 06:26:21.608909] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:37.576 [2024-11-26 06:26:21.608923] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:37.576 BaseBdev1 00:16:37.576 06:26:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.576 06:26:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:38.515 06:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:38.515 06:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:38.515 06:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:38.515 06:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:38.515 06:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:38.515 06:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:38.515 06:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.515 06:26:22 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.515 06:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.515 06:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.515 06:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.515 06:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.515 06:26:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.516 06:26:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:38.516 06:26:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.775 06:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.775 "name": "raid_bdev1", 00:16:38.775 "uuid": "b5bf29ef-093e-497c-a95e-e63076727b7d", 00:16:38.775 "strip_size_kb": 0, 00:16:38.775 "state": "online", 00:16:38.775 "raid_level": "raid1", 00:16:38.775 "superblock": true, 00:16:38.775 "num_base_bdevs": 2, 00:16:38.775 "num_base_bdevs_discovered": 1, 00:16:38.775 "num_base_bdevs_operational": 1, 00:16:38.775 "base_bdevs_list": [ 00:16:38.775 { 00:16:38.775 "name": null, 00:16:38.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.775 "is_configured": false, 00:16:38.775 "data_offset": 0, 00:16:38.775 "data_size": 63488 00:16:38.775 }, 00:16:38.775 { 00:16:38.775 "name": "BaseBdev2", 00:16:38.775 "uuid": "9598ae21-2597-56f5-bde1-d7d8e7c4c8bf", 00:16:38.775 "is_configured": true, 00:16:38.775 "data_offset": 2048, 00:16:38.775 "data_size": 63488 00:16:38.775 } 00:16:38.775 ] 00:16:38.775 }' 00:16:38.775 06:26:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.775 06:26:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:16:39.035 06:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:39.035 06:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:39.035 06:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:39.035 06:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:39.035 06:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:39.035 06:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.035 06:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.035 06:26:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.035 06:26:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:39.035 06:26:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.035 06:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:39.035 "name": "raid_bdev1", 00:16:39.035 "uuid": "b5bf29ef-093e-497c-a95e-e63076727b7d", 00:16:39.035 "strip_size_kb": 0, 00:16:39.035 "state": "online", 00:16:39.035 "raid_level": "raid1", 00:16:39.035 "superblock": true, 00:16:39.035 "num_base_bdevs": 2, 00:16:39.035 "num_base_bdevs_discovered": 1, 00:16:39.035 "num_base_bdevs_operational": 1, 00:16:39.035 "base_bdevs_list": [ 00:16:39.035 { 00:16:39.035 "name": null, 00:16:39.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.035 "is_configured": false, 00:16:39.035 "data_offset": 0, 00:16:39.035 "data_size": 63488 00:16:39.035 }, 00:16:39.035 { 00:16:39.035 "name": "BaseBdev2", 00:16:39.035 "uuid": "9598ae21-2597-56f5-bde1-d7d8e7c4c8bf", 00:16:39.035 "is_configured": true, 
00:16:39.035 "data_offset": 2048, 00:16:39.035 "data_size": 63488 00:16:39.035 } 00:16:39.035 ] 00:16:39.035 }' 00:16:39.035 06:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:39.295 06:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:39.295 06:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:39.295 06:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:39.295 06:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:39.295 06:26:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:16:39.295 06:26:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:39.295 06:26:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:39.295 06:26:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:39.295 06:26:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:39.295 06:26:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:39.295 06:26:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:39.295 06:26:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.295 06:26:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:39.295 [2024-11-26 06:26:23.265631] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:39.296 [2024-11-26 06:26:23.265898] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:39.296 [2024-11-26 06:26:23.265955] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:39.296 request: 00:16:39.296 { 00:16:39.296 "base_bdev": "BaseBdev1", 00:16:39.296 "raid_bdev": "raid_bdev1", 00:16:39.296 "method": "bdev_raid_add_base_bdev", 00:16:39.296 "req_id": 1 00:16:39.296 } 00:16:39.296 Got JSON-RPC error response 00:16:39.296 response: 00:16:39.296 { 00:16:39.296 "code": -22, 00:16:39.296 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:39.296 } 00:16:39.296 06:26:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:39.296 06:26:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:16:39.296 06:26:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:39.296 06:26:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:39.296 06:26:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:39.296 06:26:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:40.233 06:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:40.233 06:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:40.233 06:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:40.233 06:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:40.233 06:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:40.233 06:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:16:40.233 06:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.233 06:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:40.233 06:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:40.233 06:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.233 06:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.233 06:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.233 06:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.233 06:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:40.233 06:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.233 06:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.233 "name": "raid_bdev1", 00:16:40.233 "uuid": "b5bf29ef-093e-497c-a95e-e63076727b7d", 00:16:40.233 "strip_size_kb": 0, 00:16:40.233 "state": "online", 00:16:40.233 "raid_level": "raid1", 00:16:40.233 "superblock": true, 00:16:40.233 "num_base_bdevs": 2, 00:16:40.233 "num_base_bdevs_discovered": 1, 00:16:40.233 "num_base_bdevs_operational": 1, 00:16:40.233 "base_bdevs_list": [ 00:16:40.233 { 00:16:40.233 "name": null, 00:16:40.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.233 "is_configured": false, 00:16:40.233 "data_offset": 0, 00:16:40.233 "data_size": 63488 00:16:40.233 }, 00:16:40.233 { 00:16:40.233 "name": "BaseBdev2", 00:16:40.233 "uuid": "9598ae21-2597-56f5-bde1-d7d8e7c4c8bf", 00:16:40.233 "is_configured": true, 00:16:40.233 "data_offset": 2048, 00:16:40.233 "data_size": 63488 00:16:40.233 } 00:16:40.233 ] 00:16:40.233 }' 
00:16:40.233 06:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.233 06:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:40.802 06:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:40.802 06:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:40.802 06:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:40.802 06:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:40.802 06:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:40.802 06:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.802 06:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.802 06:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.802 06:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:40.802 06:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.802 06:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:40.802 "name": "raid_bdev1", 00:16:40.802 "uuid": "b5bf29ef-093e-497c-a95e-e63076727b7d", 00:16:40.802 "strip_size_kb": 0, 00:16:40.802 "state": "online", 00:16:40.802 "raid_level": "raid1", 00:16:40.802 "superblock": true, 00:16:40.802 "num_base_bdevs": 2, 00:16:40.802 "num_base_bdevs_discovered": 1, 00:16:40.802 "num_base_bdevs_operational": 1, 00:16:40.802 "base_bdevs_list": [ 00:16:40.802 { 00:16:40.802 "name": null, 00:16:40.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.802 "is_configured": false, 00:16:40.802 "data_offset": 0, 
00:16:40.802 "data_size": 63488 00:16:40.802 }, 00:16:40.802 { 00:16:40.802 "name": "BaseBdev2", 00:16:40.802 "uuid": "9598ae21-2597-56f5-bde1-d7d8e7c4c8bf", 00:16:40.802 "is_configured": true, 00:16:40.802 "data_offset": 2048, 00:16:40.802 "data_size": 63488 00:16:40.802 } 00:16:40.802 ] 00:16:40.802 }' 00:16:40.802 06:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:40.803 06:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:40.803 06:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:40.803 06:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:40.803 06:26:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 77382 00:16:40.803 06:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 77382 ']' 00:16:40.803 06:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 77382 00:16:40.803 06:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:16:40.803 06:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:40.803 06:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77382 00:16:40.803 06:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:40.803 06:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:40.803 killing process with pid 77382 00:16:40.803 Received shutdown signal, test time was about 17.439779 seconds 00:16:40.803 00:16:40.803 Latency(us) 00:16:40.803 [2024-11-26T06:26:24.940Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:40.803 [2024-11-26T06:26:24.940Z] 
=================================================================================================================== 00:16:40.803 [2024-11-26T06:26:24.940Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:40.803 06:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77382' 00:16:40.803 06:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 77382 00:16:40.803 [2024-11-26 06:26:24.888774] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:40.803 06:26:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 77382 00:16:40.803 [2024-11-26 06:26:24.888923] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:40.803 [2024-11-26 06:26:24.888987] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:40.803 [2024-11-26 06:26:24.888998] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:41.062 [2024-11-26 06:26:25.143642] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:42.486 06:26:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:16:42.486 00:16:42.486 real 0m20.750s 00:16:42.486 user 0m27.087s 00:16:42.486 sys 0m2.384s 00:16:42.486 ************************************ 00:16:42.486 END TEST raid_rebuild_test_sb_io 00:16:42.486 ************************************ 00:16:42.486 06:26:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:42.486 06:26:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:42.486 06:26:26 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:16:42.486 06:26:26 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:16:42.486 06:26:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 
7 -le 1 ']' 00:16:42.486 06:26:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:42.486 06:26:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:42.486 ************************************ 00:16:42.486 START TEST raid_rebuild_test 00:16:42.486 ************************************ 00:16:42.486 06:26:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:16:42.486 06:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:42.486 06:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:42.486 06:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:42.486 06:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:42.486 06:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:42.486 06:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:42.486 06:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:42.486 06:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:42.486 06:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:42.486 06:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:42.486 06:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:42.486 06:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:42.486 06:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:42.486 06:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:42.486 06:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:42.486 06:26:26 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:42.486 06:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:42.486 06:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:42.486 06:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:42.486 06:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:42.486 06:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:42.486 06:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:42.486 06:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:42.486 06:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:42.486 06:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:42.486 06:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:42.486 06:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:42.486 06:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:42.486 06:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:16:42.486 06:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=78082 00:16:42.486 06:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 78082 00:16:42.486 06:26:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:42.486 06:26:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 78082 ']' 00:16:42.486 06:26:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:16:42.486 06:26:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:42.486 06:26:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:42.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:42.486 06:26:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:42.486 06:26:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.486 [2024-11-26 06:26:26.564004] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:16:42.486 [2024-11-26 06:26:26.564264] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --matchI/O size of 3145728 is greater than zero copy threshold (65536). 00:16:42.486 Zero copy mechanism will not be used. 
00:16:42.486 -allocations --file-prefix=spdk_pid78082 ] 00:16:42.746 [2024-11-26 06:26:26.738407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:42.746 [2024-11-26 06:26:26.859748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:43.006 [2024-11-26 06:26:27.073327] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:43.006 [2024-11-26 06:26:27.073406] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:43.576 06:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:43.576 06:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:16:43.576 06:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:43.576 06:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:43.576 06:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.576 06:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.576 BaseBdev1_malloc 00:16:43.576 06:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.576 06:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:43.576 06:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.576 06:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.576 [2024-11-26 06:26:27.465810] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:43.576 [2024-11-26 06:26:27.465967] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:43.576 [2024-11-26 06:26:27.466008] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:43.576 
[2024-11-26 06:26:27.466068] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:43.576 [2024-11-26 06:26:27.468164] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:43.576 [2024-11-26 06:26:27.468239] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:43.576 BaseBdev1 00:16:43.576 06:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.576 06:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:43.576 06:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:43.576 06:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.576 06:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.576 BaseBdev2_malloc 00:16:43.576 06:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.576 06:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:43.576 06:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.576 06:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.576 [2024-11-26 06:26:27.524117] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:43.576 [2024-11-26 06:26:27.524267] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:43.576 [2024-11-26 06:26:27.524303] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:43.576 [2024-11-26 06:26:27.524334] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:43.576 [2024-11-26 06:26:27.526424] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:16:43.576 [2024-11-26 06:26:27.526500] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:43.576 BaseBdev2 00:16:43.576 06:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.576 06:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:43.576 06:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:43.576 06:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.576 06:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.576 BaseBdev3_malloc 00:16:43.576 06:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.576 06:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:43.576 06:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.576 06:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.576 [2024-11-26 06:26:27.588482] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:43.576 [2024-11-26 06:26:27.588644] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:43.576 [2024-11-26 06:26:27.588683] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:43.576 [2024-11-26 06:26:27.588715] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:43.576 [2024-11-26 06:26:27.590782] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:43.576 [2024-11-26 06:26:27.590868] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:43.576 BaseBdev3 00:16:43.576 06:26:27 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.576 06:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:43.576 06:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:43.576 06:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.576 06:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.576 BaseBdev4_malloc 00:16:43.576 06:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.576 06:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:43.576 06:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.576 06:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.576 [2024-11-26 06:26:27.644429] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:43.576 [2024-11-26 06:26:27.644596] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:43.576 [2024-11-26 06:26:27.644634] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:43.576 [2024-11-26 06:26:27.644667] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:43.576 [2024-11-26 06:26:27.646882] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:43.576 [2024-11-26 06:26:27.646925] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:43.576 BaseBdev4 00:16:43.576 06:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.576 06:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:43.576 06:26:27 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.576 06:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.576 spare_malloc 00:16:43.576 06:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.576 06:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:43.576 06:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.576 06:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.576 spare_delay 00:16:43.576 06:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.576 06:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:43.576 06:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.576 06:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.835 [2024-11-26 06:26:27.712259] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:43.835 [2024-11-26 06:26:27.712434] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:43.835 [2024-11-26 06:26:27.712479] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:43.835 [2024-11-26 06:26:27.712527] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:43.835 [2024-11-26 06:26:27.714713] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:43.835 [2024-11-26 06:26:27.714786] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:43.835 spare 00:16:43.836 06:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.836 06:26:27 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:43.836 06:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.836 06:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.836 [2024-11-26 06:26:27.724273] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:43.836 [2024-11-26 06:26:27.726042] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:43.836 [2024-11-26 06:26:27.726118] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:43.836 [2024-11-26 06:26:27.726169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:43.836 [2024-11-26 06:26:27.726241] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:43.836 [2024-11-26 06:26:27.726254] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:43.836 [2024-11-26 06:26:27.726508] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:43.836 [2024-11-26 06:26:27.726666] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:43.836 [2024-11-26 06:26:27.726678] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:43.836 [2024-11-26 06:26:27.726820] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:43.836 06:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.836 06:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:43.836 06:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:43.836 06:26:27 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:43.836 06:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:43.836 06:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:43.836 06:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:43.836 06:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.836 06:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.836 06:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.836 06:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.836 06:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.836 06:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.836 06:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.836 06:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.836 06:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.836 06:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.836 "name": "raid_bdev1", 00:16:43.836 "uuid": "26fb3c0c-9677-4268-8e78-cc99f5592b52", 00:16:43.836 "strip_size_kb": 0, 00:16:43.836 "state": "online", 00:16:43.836 "raid_level": "raid1", 00:16:43.836 "superblock": false, 00:16:43.836 "num_base_bdevs": 4, 00:16:43.836 "num_base_bdevs_discovered": 4, 00:16:43.836 "num_base_bdevs_operational": 4, 00:16:43.836 "base_bdevs_list": [ 00:16:43.836 { 00:16:43.836 "name": "BaseBdev1", 00:16:43.836 "uuid": "d939f279-152f-5bb2-9745-76ca707a0999", 00:16:43.836 "is_configured": true, 00:16:43.836 
"data_offset": 0, 00:16:43.836 "data_size": 65536 00:16:43.836 }, 00:16:43.836 { 00:16:43.836 "name": "BaseBdev2", 00:16:43.836 "uuid": "66c24bd6-1078-5ade-a716-703f876bedb7", 00:16:43.836 "is_configured": true, 00:16:43.836 "data_offset": 0, 00:16:43.836 "data_size": 65536 00:16:43.836 }, 00:16:43.836 { 00:16:43.836 "name": "BaseBdev3", 00:16:43.836 "uuid": "edcbd689-b0fb-50af-9aa0-9d45f8a81b6f", 00:16:43.836 "is_configured": true, 00:16:43.836 "data_offset": 0, 00:16:43.836 "data_size": 65536 00:16:43.836 }, 00:16:43.836 { 00:16:43.836 "name": "BaseBdev4", 00:16:43.836 "uuid": "0df79517-4952-583b-a3b6-76edc4c4c5ca", 00:16:43.836 "is_configured": true, 00:16:43.836 "data_offset": 0, 00:16:43.836 "data_size": 65536 00:16:43.836 } 00:16:43.836 ] 00:16:43.836 }' 00:16:43.836 06:26:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.836 06:26:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.094 06:26:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:44.094 06:26:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.094 06:26:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.094 06:26:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:44.094 [2024-11-26 06:26:28.195974] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:44.094 06:26:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.353 06:26:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:16:44.353 06:26:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:44.353 06:26:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.353 06:26:28 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.353 06:26:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.353 06:26:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.353 06:26:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:44.353 06:26:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:44.354 06:26:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:44.354 06:26:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:44.354 06:26:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:44.354 06:26:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:44.354 06:26:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:44.354 06:26:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:44.354 06:26:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:44.354 06:26:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:44.354 06:26:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:44.354 06:26:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:44.354 06:26:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:44.354 06:26:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:44.354 [2024-11-26 06:26:28.483105] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:44.612 /dev/nbd0 00:16:44.612 06:26:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 
00:16:44.612 06:26:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:44.612 06:26:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:44.612 06:26:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:44.612 06:26:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:44.612 06:26:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:44.612 06:26:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:44.612 06:26:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:44.612 06:26:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:44.612 06:26:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:44.612 06:26:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:44.612 1+0 records in 00:16:44.612 1+0 records out 00:16:44.612 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000590823 s, 6.9 MB/s 00:16:44.612 06:26:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:44.612 06:26:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:44.612 06:26:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:44.612 06:26:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:44.612 06:26:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:44.612 06:26:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:44.612 06:26:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 
1 )) 00:16:44.612 06:26:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:16:44.612 06:26:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:16:44.612 06:26:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:16:51.179 65536+0 records in 00:16:51.179 65536+0 records out 00:16:51.179 33554432 bytes (34 MB, 32 MiB) copied, 6.39575 s, 5.2 MB/s 00:16:51.179 06:26:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:51.179 06:26:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:51.179 06:26:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:51.179 06:26:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:51.179 06:26:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:51.179 06:26:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:51.179 06:26:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:51.179 06:26:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:51.179 [2024-11-26 06:26:35.165328] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:51.179 06:26:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:51.179 06:26:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:51.179 06:26:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:51.179 06:26:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:51.179 06:26:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 
00:16:51.179 06:26:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:51.179 06:26:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:51.179 06:26:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:51.179 06:26:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.179 06:26:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.179 [2024-11-26 06:26:35.185680] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:51.179 06:26:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.179 06:26:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:51.179 06:26:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:51.179 06:26:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:51.179 06:26:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:51.179 06:26:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:51.179 06:26:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:51.179 06:26:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:51.179 06:26:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:51.179 06:26:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:51.179 06:26:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:51.179 06:26:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.179 06:26:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:51.179 06:26:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.179 06:26:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.179 06:26:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.179 06:26:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:51.179 "name": "raid_bdev1", 00:16:51.179 "uuid": "26fb3c0c-9677-4268-8e78-cc99f5592b52", 00:16:51.179 "strip_size_kb": 0, 00:16:51.179 "state": "online", 00:16:51.179 "raid_level": "raid1", 00:16:51.179 "superblock": false, 00:16:51.179 "num_base_bdevs": 4, 00:16:51.179 "num_base_bdevs_discovered": 3, 00:16:51.179 "num_base_bdevs_operational": 3, 00:16:51.179 "base_bdevs_list": [ 00:16:51.179 { 00:16:51.179 "name": null, 00:16:51.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.179 "is_configured": false, 00:16:51.179 "data_offset": 0, 00:16:51.179 "data_size": 65536 00:16:51.179 }, 00:16:51.179 { 00:16:51.179 "name": "BaseBdev2", 00:16:51.179 "uuid": "66c24bd6-1078-5ade-a716-703f876bedb7", 00:16:51.179 "is_configured": true, 00:16:51.179 "data_offset": 0, 00:16:51.179 "data_size": 65536 00:16:51.179 }, 00:16:51.179 { 00:16:51.179 "name": "BaseBdev3", 00:16:51.179 "uuid": "edcbd689-b0fb-50af-9aa0-9d45f8a81b6f", 00:16:51.179 "is_configured": true, 00:16:51.179 "data_offset": 0, 00:16:51.179 "data_size": 65536 00:16:51.179 }, 00:16:51.179 { 00:16:51.179 "name": "BaseBdev4", 00:16:51.179 "uuid": "0df79517-4952-583b-a3b6-76edc4c4c5ca", 00:16:51.179 "is_configured": true, 00:16:51.179 "data_offset": 0, 00:16:51.179 "data_size": 65536 00:16:51.179 } 00:16:51.179 ] 00:16:51.179 }' 00:16:51.179 06:26:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:51.179 06:26:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.748 06:26:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # 
rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:51.748 06:26:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.748 06:26:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.748 [2024-11-26 06:26:35.652880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:51.748 [2024-11-26 06:26:35.668944] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:16:51.748 06:26:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.748 06:26:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:51.748 [2024-11-26 06:26:35.671177] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:52.688 06:26:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:52.688 06:26:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:52.688 06:26:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:52.688 06:26:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:52.688 06:26:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:52.688 06:26:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.688 06:26:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.688 06:26:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.688 06:26:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.688 06:26:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.688 06:26:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:16:52.688 "name": "raid_bdev1", 00:16:52.688 "uuid": "26fb3c0c-9677-4268-8e78-cc99f5592b52", 00:16:52.688 "strip_size_kb": 0, 00:16:52.688 "state": "online", 00:16:52.688 "raid_level": "raid1", 00:16:52.688 "superblock": false, 00:16:52.688 "num_base_bdevs": 4, 00:16:52.688 "num_base_bdevs_discovered": 4, 00:16:52.688 "num_base_bdevs_operational": 4, 00:16:52.688 "process": { 00:16:52.688 "type": "rebuild", 00:16:52.688 "target": "spare", 00:16:52.688 "progress": { 00:16:52.688 "blocks": 20480, 00:16:52.688 "percent": 31 00:16:52.688 } 00:16:52.688 }, 00:16:52.688 "base_bdevs_list": [ 00:16:52.688 { 00:16:52.688 "name": "spare", 00:16:52.688 "uuid": "14eb1fd5-0ac2-556b-8d1b-724062082214", 00:16:52.688 "is_configured": true, 00:16:52.688 "data_offset": 0, 00:16:52.688 "data_size": 65536 00:16:52.688 }, 00:16:52.688 { 00:16:52.688 "name": "BaseBdev2", 00:16:52.688 "uuid": "66c24bd6-1078-5ade-a716-703f876bedb7", 00:16:52.688 "is_configured": true, 00:16:52.688 "data_offset": 0, 00:16:52.688 "data_size": 65536 00:16:52.688 }, 00:16:52.688 { 00:16:52.688 "name": "BaseBdev3", 00:16:52.688 "uuid": "edcbd689-b0fb-50af-9aa0-9d45f8a81b6f", 00:16:52.688 "is_configured": true, 00:16:52.688 "data_offset": 0, 00:16:52.688 "data_size": 65536 00:16:52.688 }, 00:16:52.688 { 00:16:52.688 "name": "BaseBdev4", 00:16:52.688 "uuid": "0df79517-4952-583b-a3b6-76edc4c4c5ca", 00:16:52.688 "is_configured": true, 00:16:52.688 "data_offset": 0, 00:16:52.688 "data_size": 65536 00:16:52.688 } 00:16:52.688 ] 00:16:52.688 }' 00:16:52.688 06:26:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:52.688 06:26:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:52.688 06:26:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:52.688 06:26:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:52.688 
06:26:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:52.688 06:26:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.688 06:26:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.688 [2024-11-26 06:26:36.818377] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:52.948 [2024-11-26 06:26:36.877465] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:52.948 [2024-11-26 06:26:36.877678] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:52.948 [2024-11-26 06:26:36.877701] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:52.948 [2024-11-26 06:26:36.877712] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:52.948 06:26:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.948 06:26:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:52.948 06:26:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:52.948 06:26:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:52.948 06:26:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:52.948 06:26:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:52.948 06:26:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:52.948 06:26:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.948 06:26:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.948 06:26:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:16:52.948 06:26:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.948 06:26:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.948 06:26:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.948 06:26:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.948 06:26:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.948 06:26:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.948 06:26:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.948 "name": "raid_bdev1", 00:16:52.948 "uuid": "26fb3c0c-9677-4268-8e78-cc99f5592b52", 00:16:52.948 "strip_size_kb": 0, 00:16:52.948 "state": "online", 00:16:52.948 "raid_level": "raid1", 00:16:52.948 "superblock": false, 00:16:52.948 "num_base_bdevs": 4, 00:16:52.948 "num_base_bdevs_discovered": 3, 00:16:52.948 "num_base_bdevs_operational": 3, 00:16:52.948 "base_bdevs_list": [ 00:16:52.948 { 00:16:52.948 "name": null, 00:16:52.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.948 "is_configured": false, 00:16:52.948 "data_offset": 0, 00:16:52.948 "data_size": 65536 00:16:52.948 }, 00:16:52.948 { 00:16:52.948 "name": "BaseBdev2", 00:16:52.948 "uuid": "66c24bd6-1078-5ade-a716-703f876bedb7", 00:16:52.948 "is_configured": true, 00:16:52.948 "data_offset": 0, 00:16:52.948 "data_size": 65536 00:16:52.948 }, 00:16:52.948 { 00:16:52.948 "name": "BaseBdev3", 00:16:52.948 "uuid": "edcbd689-b0fb-50af-9aa0-9d45f8a81b6f", 00:16:52.948 "is_configured": true, 00:16:52.948 "data_offset": 0, 00:16:52.948 "data_size": 65536 00:16:52.948 }, 00:16:52.948 { 00:16:52.948 "name": "BaseBdev4", 00:16:52.948 "uuid": "0df79517-4952-583b-a3b6-76edc4c4c5ca", 00:16:52.948 "is_configured": true, 00:16:52.948 "data_offset": 0, 00:16:52.948 
"data_size": 65536 00:16:52.948 } 00:16:52.948 ] 00:16:52.948 }' 00:16:52.948 06:26:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.948 06:26:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.208 06:26:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:53.208 06:26:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:53.208 06:26:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:53.208 06:26:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:53.208 06:26:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:53.208 06:26:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:53.208 06:26:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.208 06:26:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.208 06:26:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.208 06:26:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.208 06:26:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:53.208 "name": "raid_bdev1", 00:16:53.208 "uuid": "26fb3c0c-9677-4268-8e78-cc99f5592b52", 00:16:53.208 "strip_size_kb": 0, 00:16:53.208 "state": "online", 00:16:53.208 "raid_level": "raid1", 00:16:53.208 "superblock": false, 00:16:53.208 "num_base_bdevs": 4, 00:16:53.208 "num_base_bdevs_discovered": 3, 00:16:53.208 "num_base_bdevs_operational": 3, 00:16:53.208 "base_bdevs_list": [ 00:16:53.208 { 00:16:53.208 "name": null, 00:16:53.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.208 "is_configured": false, 00:16:53.208 "data_offset": 0, 
00:16:53.208 "data_size": 65536 00:16:53.208 }, 00:16:53.208 { 00:16:53.208 "name": "BaseBdev2", 00:16:53.208 "uuid": "66c24bd6-1078-5ade-a716-703f876bedb7", 00:16:53.208 "is_configured": true, 00:16:53.208 "data_offset": 0, 00:16:53.208 "data_size": 65536 00:16:53.208 }, 00:16:53.208 { 00:16:53.208 "name": "BaseBdev3", 00:16:53.208 "uuid": "edcbd689-b0fb-50af-9aa0-9d45f8a81b6f", 00:16:53.208 "is_configured": true, 00:16:53.208 "data_offset": 0, 00:16:53.208 "data_size": 65536 00:16:53.208 }, 00:16:53.208 { 00:16:53.208 "name": "BaseBdev4", 00:16:53.208 "uuid": "0df79517-4952-583b-a3b6-76edc4c4c5ca", 00:16:53.208 "is_configured": true, 00:16:53.208 "data_offset": 0, 00:16:53.208 "data_size": 65536 00:16:53.208 } 00:16:53.208 ] 00:16:53.208 }' 00:16:53.467 06:26:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:53.467 06:26:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:53.467 06:26:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:53.467 06:26:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:53.467 06:26:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:53.467 06:26:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.467 06:26:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.467 [2024-11-26 06:26:37.429467] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:53.467 [2024-11-26 06:26:37.445526] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:16:53.467 06:26:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.468 06:26:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:53.468 [2024-11-26 06:26:37.447808] 
bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:54.404 06:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:54.404 06:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:54.404 06:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:54.404 06:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:54.404 06:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:54.404 06:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.404 06:26:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.404 06:26:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.404 06:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.404 06:26:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.404 06:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:54.404 "name": "raid_bdev1", 00:16:54.404 "uuid": "26fb3c0c-9677-4268-8e78-cc99f5592b52", 00:16:54.404 "strip_size_kb": 0, 00:16:54.404 "state": "online", 00:16:54.404 "raid_level": "raid1", 00:16:54.404 "superblock": false, 00:16:54.404 "num_base_bdevs": 4, 00:16:54.404 "num_base_bdevs_discovered": 4, 00:16:54.404 "num_base_bdevs_operational": 4, 00:16:54.404 "process": { 00:16:54.404 "type": "rebuild", 00:16:54.404 "target": "spare", 00:16:54.404 "progress": { 00:16:54.404 "blocks": 20480, 00:16:54.404 "percent": 31 00:16:54.404 } 00:16:54.404 }, 00:16:54.404 "base_bdevs_list": [ 00:16:54.404 { 00:16:54.404 "name": "spare", 00:16:54.404 "uuid": "14eb1fd5-0ac2-556b-8d1b-724062082214", 00:16:54.404 
"is_configured": true, 00:16:54.404 "data_offset": 0, 00:16:54.404 "data_size": 65536 00:16:54.404 }, 00:16:54.404 { 00:16:54.404 "name": "BaseBdev2", 00:16:54.404 "uuid": "66c24bd6-1078-5ade-a716-703f876bedb7", 00:16:54.404 "is_configured": true, 00:16:54.404 "data_offset": 0, 00:16:54.404 "data_size": 65536 00:16:54.404 }, 00:16:54.404 { 00:16:54.404 "name": "BaseBdev3", 00:16:54.404 "uuid": "edcbd689-b0fb-50af-9aa0-9d45f8a81b6f", 00:16:54.404 "is_configured": true, 00:16:54.404 "data_offset": 0, 00:16:54.404 "data_size": 65536 00:16:54.404 }, 00:16:54.404 { 00:16:54.404 "name": "BaseBdev4", 00:16:54.404 "uuid": "0df79517-4952-583b-a3b6-76edc4c4c5ca", 00:16:54.404 "is_configured": true, 00:16:54.404 "data_offset": 0, 00:16:54.404 "data_size": 65536 00:16:54.404 } 00:16:54.404 ] 00:16:54.404 }' 00:16:54.404 06:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:54.664 06:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:54.664 06:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:54.664 06:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:54.664 06:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:54.664 06:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:54.664 06:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:54.664 06:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:16:54.664 06:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:54.664 06:26:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.664 06:26:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.664 
[2024-11-26 06:26:38.598893] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:54.664 [2024-11-26 06:26:38.653825] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:16:54.664 06:26:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.664 06:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:16:54.664 06:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:16:54.664 06:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:54.664 06:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:54.664 06:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:54.664 06:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:54.664 06:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:54.664 06:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.664 06:26:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.664 06:26:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.664 06:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.664 06:26:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.664 06:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:54.664 "name": "raid_bdev1", 00:16:54.664 "uuid": "26fb3c0c-9677-4268-8e78-cc99f5592b52", 00:16:54.664 "strip_size_kb": 0, 00:16:54.664 "state": "online", 00:16:54.664 "raid_level": "raid1", 00:16:54.664 "superblock": false, 00:16:54.664 "num_base_bdevs": 4, 00:16:54.664 
"num_base_bdevs_discovered": 3, 00:16:54.664 "num_base_bdevs_operational": 3, 00:16:54.664 "process": { 00:16:54.664 "type": "rebuild", 00:16:54.664 "target": "spare", 00:16:54.664 "progress": { 00:16:54.664 "blocks": 24576, 00:16:54.664 "percent": 37 00:16:54.664 } 00:16:54.664 }, 00:16:54.664 "base_bdevs_list": [ 00:16:54.664 { 00:16:54.664 "name": "spare", 00:16:54.664 "uuid": "14eb1fd5-0ac2-556b-8d1b-724062082214", 00:16:54.664 "is_configured": true, 00:16:54.664 "data_offset": 0, 00:16:54.664 "data_size": 65536 00:16:54.664 }, 00:16:54.664 { 00:16:54.664 "name": null, 00:16:54.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.664 "is_configured": false, 00:16:54.664 "data_offset": 0, 00:16:54.664 "data_size": 65536 00:16:54.664 }, 00:16:54.664 { 00:16:54.664 "name": "BaseBdev3", 00:16:54.664 "uuid": "edcbd689-b0fb-50af-9aa0-9d45f8a81b6f", 00:16:54.664 "is_configured": true, 00:16:54.664 "data_offset": 0, 00:16:54.664 "data_size": 65536 00:16:54.664 }, 00:16:54.664 { 00:16:54.664 "name": "BaseBdev4", 00:16:54.664 "uuid": "0df79517-4952-583b-a3b6-76edc4c4c5ca", 00:16:54.664 "is_configured": true, 00:16:54.664 "data_offset": 0, 00:16:54.664 "data_size": 65536 00:16:54.664 } 00:16:54.664 ] 00:16:54.664 }' 00:16:54.664 06:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:54.664 06:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:54.664 06:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:54.665 06:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:54.665 06:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=471 00:16:54.665 06:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:54.665 06:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 
rebuild spare 00:16:54.665 06:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:54.665 06:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:54.665 06:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:54.665 06:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:54.665 06:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.665 06:26:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.665 06:26:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.665 06:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.924 06:26:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.924 06:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:54.924 "name": "raid_bdev1", 00:16:54.924 "uuid": "26fb3c0c-9677-4268-8e78-cc99f5592b52", 00:16:54.924 "strip_size_kb": 0, 00:16:54.924 "state": "online", 00:16:54.924 "raid_level": "raid1", 00:16:54.924 "superblock": false, 00:16:54.924 "num_base_bdevs": 4, 00:16:54.924 "num_base_bdevs_discovered": 3, 00:16:54.924 "num_base_bdevs_operational": 3, 00:16:54.924 "process": { 00:16:54.924 "type": "rebuild", 00:16:54.924 "target": "spare", 00:16:54.924 "progress": { 00:16:54.924 "blocks": 26624, 00:16:54.924 "percent": 40 00:16:54.924 } 00:16:54.924 }, 00:16:54.924 "base_bdevs_list": [ 00:16:54.924 { 00:16:54.924 "name": "spare", 00:16:54.924 "uuid": "14eb1fd5-0ac2-556b-8d1b-724062082214", 00:16:54.924 "is_configured": true, 00:16:54.924 "data_offset": 0, 00:16:54.924 "data_size": 65536 00:16:54.924 }, 00:16:54.924 { 00:16:54.924 "name": null, 00:16:54.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.924 
"is_configured": false, 00:16:54.924 "data_offset": 0, 00:16:54.924 "data_size": 65536 00:16:54.924 }, 00:16:54.924 { 00:16:54.924 "name": "BaseBdev3", 00:16:54.924 "uuid": "edcbd689-b0fb-50af-9aa0-9d45f8a81b6f", 00:16:54.924 "is_configured": true, 00:16:54.924 "data_offset": 0, 00:16:54.924 "data_size": 65536 00:16:54.924 }, 00:16:54.924 { 00:16:54.924 "name": "BaseBdev4", 00:16:54.924 "uuid": "0df79517-4952-583b-a3b6-76edc4c4c5ca", 00:16:54.924 "is_configured": true, 00:16:54.924 "data_offset": 0, 00:16:54.924 "data_size": 65536 00:16:54.924 } 00:16:54.924 ] 00:16:54.924 }' 00:16:54.924 06:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:54.924 06:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:54.924 06:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:54.924 06:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:54.924 06:26:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:55.879 06:26:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:55.879 06:26:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:55.879 06:26:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:55.879 06:26:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:55.879 06:26:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:55.879 06:26:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:55.879 06:26:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.879 06:26:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:16:55.879 06:26:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.879 06:26:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.879 06:26:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.879 06:26:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:55.879 "name": "raid_bdev1", 00:16:55.879 "uuid": "26fb3c0c-9677-4268-8e78-cc99f5592b52", 00:16:55.879 "strip_size_kb": 0, 00:16:55.879 "state": "online", 00:16:55.879 "raid_level": "raid1", 00:16:55.879 "superblock": false, 00:16:55.879 "num_base_bdevs": 4, 00:16:55.879 "num_base_bdevs_discovered": 3, 00:16:55.879 "num_base_bdevs_operational": 3, 00:16:55.879 "process": { 00:16:55.879 "type": "rebuild", 00:16:55.879 "target": "spare", 00:16:55.879 "progress": { 00:16:55.879 "blocks": 49152, 00:16:55.879 "percent": 75 00:16:55.879 } 00:16:55.879 }, 00:16:55.879 "base_bdevs_list": [ 00:16:55.879 { 00:16:55.879 "name": "spare", 00:16:55.879 "uuid": "14eb1fd5-0ac2-556b-8d1b-724062082214", 00:16:55.879 "is_configured": true, 00:16:55.879 "data_offset": 0, 00:16:55.879 "data_size": 65536 00:16:55.879 }, 00:16:55.879 { 00:16:55.879 "name": null, 00:16:55.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.879 "is_configured": false, 00:16:55.879 "data_offset": 0, 00:16:55.879 "data_size": 65536 00:16:55.879 }, 00:16:55.879 { 00:16:55.879 "name": "BaseBdev3", 00:16:55.879 "uuid": "edcbd689-b0fb-50af-9aa0-9d45f8a81b6f", 00:16:55.879 "is_configured": true, 00:16:55.879 "data_offset": 0, 00:16:55.879 "data_size": 65536 00:16:55.879 }, 00:16:55.879 { 00:16:55.879 "name": "BaseBdev4", 00:16:55.879 "uuid": "0df79517-4952-583b-a3b6-76edc4c4c5ca", 00:16:55.879 "is_configured": true, 00:16:55.879 "data_offset": 0, 00:16:55.879 "data_size": 65536 00:16:55.879 } 00:16:55.879 ] 00:16:55.879 }' 00:16:55.879 06:26:39 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:56.142 06:26:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:56.142 06:26:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:56.142 06:26:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:56.142 06:26:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:56.712 [2024-11-26 06:26:40.663886] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:56.712 [2024-11-26 06:26:40.664001] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:56.712 [2024-11-26 06:26:40.664063] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:56.973 06:26:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:56.973 06:26:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:56.973 06:26:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:56.973 06:26:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:56.973 06:26:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:56.973 06:26:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:56.973 06:26:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.973 06:26:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:56.973 06:26:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.973 06:26:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.232 06:26:41 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.232 06:26:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:57.232 "name": "raid_bdev1", 00:16:57.232 "uuid": "26fb3c0c-9677-4268-8e78-cc99f5592b52", 00:16:57.232 "strip_size_kb": 0, 00:16:57.232 "state": "online", 00:16:57.232 "raid_level": "raid1", 00:16:57.232 "superblock": false, 00:16:57.232 "num_base_bdevs": 4, 00:16:57.232 "num_base_bdevs_discovered": 3, 00:16:57.232 "num_base_bdevs_operational": 3, 00:16:57.232 "base_bdevs_list": [ 00:16:57.232 { 00:16:57.232 "name": "spare", 00:16:57.232 "uuid": "14eb1fd5-0ac2-556b-8d1b-724062082214", 00:16:57.232 "is_configured": true, 00:16:57.232 "data_offset": 0, 00:16:57.232 "data_size": 65536 00:16:57.232 }, 00:16:57.232 { 00:16:57.232 "name": null, 00:16:57.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.232 "is_configured": false, 00:16:57.232 "data_offset": 0, 00:16:57.232 "data_size": 65536 00:16:57.232 }, 00:16:57.232 { 00:16:57.232 "name": "BaseBdev3", 00:16:57.232 "uuid": "edcbd689-b0fb-50af-9aa0-9d45f8a81b6f", 00:16:57.232 "is_configured": true, 00:16:57.232 "data_offset": 0, 00:16:57.232 "data_size": 65536 00:16:57.232 }, 00:16:57.232 { 00:16:57.232 "name": "BaseBdev4", 00:16:57.232 "uuid": "0df79517-4952-583b-a3b6-76edc4c4c5ca", 00:16:57.232 "is_configured": true, 00:16:57.232 "data_offset": 0, 00:16:57.232 "data_size": 65536 00:16:57.232 } 00:16:57.232 ] 00:16:57.232 }' 00:16:57.232 06:26:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:57.232 06:26:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:57.232 06:26:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:57.232 06:26:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:57.232 06:26:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:16:57.232 
06:26:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:57.232 06:26:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:57.232 06:26:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:57.232 06:26:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:57.232 06:26:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:57.232 06:26:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.232 06:26:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.232 06:26:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.232 06:26:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.233 06:26:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.233 06:26:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:57.233 "name": "raid_bdev1", 00:16:57.233 "uuid": "26fb3c0c-9677-4268-8e78-cc99f5592b52", 00:16:57.233 "strip_size_kb": 0, 00:16:57.233 "state": "online", 00:16:57.233 "raid_level": "raid1", 00:16:57.233 "superblock": false, 00:16:57.233 "num_base_bdevs": 4, 00:16:57.233 "num_base_bdevs_discovered": 3, 00:16:57.233 "num_base_bdevs_operational": 3, 00:16:57.233 "base_bdevs_list": [ 00:16:57.233 { 00:16:57.233 "name": "spare", 00:16:57.233 "uuid": "14eb1fd5-0ac2-556b-8d1b-724062082214", 00:16:57.233 "is_configured": true, 00:16:57.233 "data_offset": 0, 00:16:57.233 "data_size": 65536 00:16:57.233 }, 00:16:57.233 { 00:16:57.233 "name": null, 00:16:57.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.233 "is_configured": false, 00:16:57.233 "data_offset": 0, 00:16:57.233 "data_size": 65536 00:16:57.233 }, 
00:16:57.233 { 00:16:57.233 "name": "BaseBdev3", 00:16:57.233 "uuid": "edcbd689-b0fb-50af-9aa0-9d45f8a81b6f", 00:16:57.233 "is_configured": true, 00:16:57.233 "data_offset": 0, 00:16:57.233 "data_size": 65536 00:16:57.233 }, 00:16:57.233 { 00:16:57.233 "name": "BaseBdev4", 00:16:57.233 "uuid": "0df79517-4952-583b-a3b6-76edc4c4c5ca", 00:16:57.233 "is_configured": true, 00:16:57.233 "data_offset": 0, 00:16:57.233 "data_size": 65536 00:16:57.233 } 00:16:57.233 ] 00:16:57.233 }' 00:16:57.233 06:26:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:57.233 06:26:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:57.233 06:26:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:57.493 06:26:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:57.493 06:26:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:57.493 06:26:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:57.493 06:26:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:57.493 06:26:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:57.493 06:26:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:57.493 06:26:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:57.493 06:26:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:57.493 06:26:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:57.493 06:26:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:57.493 06:26:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:57.493 
06:26:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.493 06:26:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.493 06:26:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.493 06:26:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.493 06:26:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.493 06:26:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:57.493 "name": "raid_bdev1", 00:16:57.493 "uuid": "26fb3c0c-9677-4268-8e78-cc99f5592b52", 00:16:57.493 "strip_size_kb": 0, 00:16:57.493 "state": "online", 00:16:57.493 "raid_level": "raid1", 00:16:57.493 "superblock": false, 00:16:57.493 "num_base_bdevs": 4, 00:16:57.493 "num_base_bdevs_discovered": 3, 00:16:57.493 "num_base_bdevs_operational": 3, 00:16:57.493 "base_bdevs_list": [ 00:16:57.493 { 00:16:57.493 "name": "spare", 00:16:57.494 "uuid": "14eb1fd5-0ac2-556b-8d1b-724062082214", 00:16:57.494 "is_configured": true, 00:16:57.494 "data_offset": 0, 00:16:57.494 "data_size": 65536 00:16:57.494 }, 00:16:57.494 { 00:16:57.494 "name": null, 00:16:57.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.494 "is_configured": false, 00:16:57.494 "data_offset": 0, 00:16:57.494 "data_size": 65536 00:16:57.494 }, 00:16:57.494 { 00:16:57.494 "name": "BaseBdev3", 00:16:57.494 "uuid": "edcbd689-b0fb-50af-9aa0-9d45f8a81b6f", 00:16:57.494 "is_configured": true, 00:16:57.494 "data_offset": 0, 00:16:57.494 "data_size": 65536 00:16:57.494 }, 00:16:57.494 { 00:16:57.494 "name": "BaseBdev4", 00:16:57.494 "uuid": "0df79517-4952-583b-a3b6-76edc4c4c5ca", 00:16:57.494 "is_configured": true, 00:16:57.494 "data_offset": 0, 00:16:57.494 "data_size": 65536 00:16:57.494 } 00:16:57.494 ] 00:16:57.494 }' 00:16:57.494 06:26:41 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:57.494 06:26:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.753 06:26:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:57.753 06:26:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.753 06:26:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.753 [2024-11-26 06:26:41.832916] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:57.753 [2024-11-26 06:26:41.832964] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:57.753 [2024-11-26 06:26:41.833073] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:57.753 [2024-11-26 06:26:41.833170] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:57.753 [2024-11-26 06:26:41.833182] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:57.753 06:26:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.753 06:26:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.753 06:26:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:16:57.753 06:26:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.753 06:26:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.753 06:26:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.753 06:26:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:57.753 06:26:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:57.753 06:26:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # 
'[' false = true ']' 00:16:57.753 06:26:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:57.753 06:26:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:57.753 06:26:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:57.753 06:26:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:57.753 06:26:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:57.753 06:26:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:57.753 06:26:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:58.013 06:26:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:58.013 06:26:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:58.013 06:26:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:58.013 /dev/nbd0 00:16:58.013 06:26:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:58.013 06:26:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:58.013 06:26:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:58.013 06:26:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:58.013 06:26:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:58.013 06:26:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:58.013 06:26:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:58.013 06:26:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # 
break 00:16:58.013 06:26:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:58.013 06:26:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:58.013 06:26:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:58.013 1+0 records in 00:16:58.013 1+0 records out 00:16:58.013 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000495337 s, 8.3 MB/s 00:16:58.013 06:26:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:58.013 06:26:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:58.013 06:26:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:58.013 06:26:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:58.013 06:26:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:58.013 06:26:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:58.013 06:26:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:58.013 06:26:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:58.272 /dev/nbd1 00:16:58.272 06:26:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:58.272 06:26:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:58.272 06:26:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:58.272 06:26:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:58.272 06:26:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 
00:16:58.272 06:26:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:58.272 06:26:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:58.272 06:26:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:58.272 06:26:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:58.272 06:26:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:58.272 06:26:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:58.272 1+0 records in 00:16:58.272 1+0 records out 00:16:58.272 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000407906 s, 10.0 MB/s 00:16:58.272 06:26:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:58.272 06:26:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:58.272 06:26:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:58.272 06:26:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:58.272 06:26:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:58.272 06:26:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:58.272 06:26:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:58.272 06:26:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:58.531 06:26:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:58.531 06:26:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:58.531 06:26:42 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:58.531 06:26:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:58.531 06:26:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:58.531 06:26:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:58.531 06:26:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:58.790 06:26:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:58.790 06:26:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:58.790 06:26:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:58.790 06:26:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:58.790 06:26:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:58.790 06:26:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:58.790 06:26:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:58.790 06:26:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:58.790 06:26:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:58.790 06:26:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:59.050 06:26:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:59.050 06:26:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:59.050 06:26:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:59.050 06:26:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 
00:16:59.050 06:26:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:59.050 06:26:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:59.050 06:26:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:59.050 06:26:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:59.050 06:26:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:59.050 06:26:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 78082 00:16:59.050 06:26:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 78082 ']' 00:16:59.050 06:26:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 78082 00:16:59.050 06:26:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:16:59.050 06:26:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:59.050 06:26:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78082 00:16:59.050 06:26:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:59.050 06:26:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:59.050 killing process with pid 78082 00:16:59.050 06:26:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78082' 00:16:59.050 06:26:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 78082 00:16:59.050 Received shutdown signal, test time was about 60.000000 seconds 00:16:59.050 00:16:59.050 Latency(us) 00:16:59.050 [2024-11-26T06:26:43.187Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:59.050 [2024-11-26T06:26:43.187Z] =================================================================================================================== 00:16:59.050 
[2024-11-26T06:26:43.187Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:59.050 [2024-11-26 06:26:43.082929] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:59.050 06:26:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 78082 00:16:59.618 [2024-11-26 06:26:43.591596] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:00.997 06:26:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:17:00.997 00:17:00.997 real 0m18.257s 00:17:00.997 user 0m19.972s 00:17:00.997 sys 0m3.735s 00:17:00.997 06:26:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:00.997 06:26:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.997 ************************************ 00:17:00.997 END TEST raid_rebuild_test 00:17:00.997 ************************************ 00:17:00.997 06:26:44 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:17:00.997 06:26:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:00.997 06:26:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:00.997 06:26:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:00.997 ************************************ 00:17:00.997 START TEST raid_rebuild_test_sb 00:17:00.997 ************************************ 00:17:00.997 06:26:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:17:00.997 06:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:00.997 06:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:17:00.997 06:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:00.997 06:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local 
background_io=false 00:17:00.997 06:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:00.997 06:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:00.997 06:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:00.997 06:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:00.997 06:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:00.997 06:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:00.997 06:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:00.997 06:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:00.997 06:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:00.997 06:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:00.997 06:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:00.997 06:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:00.997 06:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:17:00.997 06:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:00.997 06:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:00.997 06:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:00.997 06:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:00.997 06:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:00.997 06:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:00.997 
06:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:00.997 06:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:00.997 06:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:00.997 06:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:00.997 06:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:00.997 06:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:00.997 06:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:00.997 06:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=78529 00:17:00.997 06:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 78529 00:17:00.997 06:26:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:00.997 06:26:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 78529 ']' 00:17:00.997 06:26:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:00.997 06:26:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:00.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:00.997 06:26:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:00.997 06:26:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:00.997 06:26:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.997 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:00.997 Zero copy mechanism will not be used. 00:17:00.997 [2024-11-26 06:26:44.896158] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:17:00.997 [2024-11-26 06:26:44.896292] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78529 ] 00:17:00.997 [2024-11-26 06:26:45.056874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.257 [2024-11-26 06:26:45.173456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.257 [2024-11-26 06:26:45.374258] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:01.257 [2024-11-26 06:26:45.374299] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:01.825 06:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:01.826 06:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:17:01.826 06:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:01.826 06:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:01.826 06:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.826 06:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.826 BaseBdev1_malloc 00:17:01.826 06:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:01.826 06:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:01.826 06:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.826 06:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.826 [2024-11-26 06:26:45.787774] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:01.826 [2024-11-26 06:26:45.787857] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:01.826 [2024-11-26 06:26:45.787880] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:01.826 [2024-11-26 06:26:45.787891] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:01.826 [2024-11-26 06:26:45.789954] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:01.826 [2024-11-26 06:26:45.789995] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:01.826 BaseBdev1 00:17:01.826 06:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.826 06:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:01.826 06:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:01.826 06:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.826 06:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.826 BaseBdev2_malloc 00:17:01.826 06:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.826 06:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:01.826 06:26:45 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.826 06:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.826 [2024-11-26 06:26:45.843214] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:01.826 [2024-11-26 06:26:45.843292] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:01.826 [2024-11-26 06:26:45.843312] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:01.826 [2024-11-26 06:26:45.843325] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:01.826 [2024-11-26 06:26:45.845490] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:01.826 [2024-11-26 06:26:45.845534] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:01.826 BaseBdev2 00:17:01.826 06:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.826 06:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:01.826 06:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:01.826 06:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.826 06:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.826 BaseBdev3_malloc 00:17:01.826 06:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.826 06:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:01.826 06:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.826 06:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.826 [2024-11-26 06:26:45.912865] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:01.826 [2024-11-26 06:26:45.912931] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:01.826 [2024-11-26 06:26:45.912968] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:01.826 [2024-11-26 06:26:45.912979] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:01.826 [2024-11-26 06:26:45.915128] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:01.826 [2024-11-26 06:26:45.915166] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:01.826 BaseBdev3 00:17:01.826 06:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.826 06:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:01.826 06:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:01.826 06:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.826 06:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.086 BaseBdev4_malloc 00:17:02.086 06:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.086 06:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:17:02.086 06:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.086 06:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.086 [2024-11-26 06:26:45.969252] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:17:02.086 [2024-11-26 06:26:45.969327] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:17:02.086 [2024-11-26 06:26:45.969349] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:02.086 [2024-11-26 06:26:45.969361] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:02.086 [2024-11-26 06:26:45.971390] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:02.086 [2024-11-26 06:26:45.971428] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:02.086 BaseBdev4 00:17:02.086 06:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.086 06:26:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:02.086 06:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.086 06:26:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.086 spare_malloc 00:17:02.086 06:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.086 06:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:02.086 06:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.086 06:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.086 spare_delay 00:17:02.086 06:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.086 06:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:02.086 06:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.086 06:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.086 [2024-11-26 06:26:46.038226] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:02.086 [2024-11-26 06:26:46.038301] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:02.086 [2024-11-26 06:26:46.038326] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:02.086 [2024-11-26 06:26:46.038337] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:02.086 [2024-11-26 06:26:46.040423] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:02.086 [2024-11-26 06:26:46.040462] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:02.086 spare 00:17:02.086 06:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.086 06:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:17:02.086 06:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.086 06:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.086 [2024-11-26 06:26:46.050258] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:02.086 [2024-11-26 06:26:46.052068] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:02.086 [2024-11-26 06:26:46.052139] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:02.086 [2024-11-26 06:26:46.052193] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:02.086 [2024-11-26 06:26:46.052377] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:02.086 [2024-11-26 06:26:46.052411] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:02.086 [2024-11-26 06:26:46.052677] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:02.086 [2024-11-26 06:26:46.052882] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:02.086 [2024-11-26 06:26:46.052901] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:02.086 [2024-11-26 06:26:46.053082] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:02.086 06:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.086 06:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:17:02.086 06:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:02.086 06:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:02.086 06:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:02.086 06:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:02.086 06:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:02.086 06:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:02.086 06:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:02.086 06:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:02.086 06:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:02.086 06:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.086 06:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.086 06:26:46 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.086 06:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.086 06:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.086 06:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:02.086 "name": "raid_bdev1", 00:17:02.086 "uuid": "68b4c3db-ff03-4eaa-96f6-47c96cdb3d3e", 00:17:02.086 "strip_size_kb": 0, 00:17:02.086 "state": "online", 00:17:02.086 "raid_level": "raid1", 00:17:02.086 "superblock": true, 00:17:02.086 "num_base_bdevs": 4, 00:17:02.086 "num_base_bdevs_discovered": 4, 00:17:02.086 "num_base_bdevs_operational": 4, 00:17:02.086 "base_bdevs_list": [ 00:17:02.086 { 00:17:02.086 "name": "BaseBdev1", 00:17:02.086 "uuid": "5eda8e91-e9f5-5279-baed-e675134bced2", 00:17:02.086 "is_configured": true, 00:17:02.086 "data_offset": 2048, 00:17:02.086 "data_size": 63488 00:17:02.086 }, 00:17:02.086 { 00:17:02.086 "name": "BaseBdev2", 00:17:02.086 "uuid": "5e021cd4-dc5e-53c3-9e32-9687812ccb38", 00:17:02.086 "is_configured": true, 00:17:02.086 "data_offset": 2048, 00:17:02.086 "data_size": 63488 00:17:02.086 }, 00:17:02.086 { 00:17:02.086 "name": "BaseBdev3", 00:17:02.086 "uuid": "7158c6ab-a2e1-5e0f-904b-c341822c0823", 00:17:02.086 "is_configured": true, 00:17:02.086 "data_offset": 2048, 00:17:02.086 "data_size": 63488 00:17:02.086 }, 00:17:02.086 { 00:17:02.086 "name": "BaseBdev4", 00:17:02.086 "uuid": "0ae564be-3b35-517d-90dc-0108b41869fe", 00:17:02.086 "is_configured": true, 00:17:02.086 "data_offset": 2048, 00:17:02.086 "data_size": 63488 00:17:02.086 } 00:17:02.086 ] 00:17:02.086 }' 00:17:02.086 06:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:02.086 06:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.655 06:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:02.655 
06:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:02.655 06:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.655 06:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.655 [2024-11-26 06:26:46.497953] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:02.655 06:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.655 06:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:17:02.655 06:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.655 06:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.655 06:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.655 06:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:02.655 06:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.655 06:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:17:02.655 06:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:02.655 06:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:02.655 06:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:02.655 06:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:02.655 06:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:02.655 06:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:02.655 06:26:46 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:02.655 06:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:02.655 06:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:02.655 06:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:02.655 06:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:02.655 06:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:02.655 06:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:02.655 [2024-11-26 06:26:46.777189] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:02.914 /dev/nbd0 00:17:02.914 06:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:02.914 06:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:02.914 06:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:02.914 06:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:02.914 06:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:02.914 06:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:02.914 06:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:02.914 06:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:02.914 06:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:02.914 06:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:02.914 06:26:46 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:02.914 1+0 records in 00:17:02.914 1+0 records out 00:17:02.914 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00044153 s, 9.3 MB/s 00:17:02.914 06:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:02.914 06:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:02.914 06:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:02.914 06:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:02.914 06:26:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:02.914 06:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:02.914 06:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:02.914 06:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:17:02.914 06:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:17:02.914 06:26:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:17:09.483 63488+0 records in 00:17:09.483 63488+0 records out 00:17:09.483 32505856 bytes (33 MB, 31 MiB) copied, 5.99438 s, 5.4 MB/s 00:17:09.483 06:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:09.483 06:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:09.483 06:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:09.483 06:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 
00:17:09.483 06:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:09.483 06:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:09.483 06:26:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:09.483 [2024-11-26 06:26:53.054933] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:09.483 06:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:09.483 06:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:09.483 06:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:09.483 06:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:09.483 06:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:09.483 06:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:09.483 06:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:09.483 06:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:09.483 06:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:09.483 06:26:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.483 06:26:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.483 [2024-11-26 06:26:53.094971] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:09.483 06:26:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.483 06:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:09.483 06:26:53 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:09.483 06:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:09.483 06:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:09.483 06:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:09.483 06:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:09.483 06:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:09.483 06:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:09.483 06:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:09.483 06:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:09.483 06:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.484 06:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.484 06:26:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.484 06:26:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.484 06:26:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.484 06:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:09.484 "name": "raid_bdev1", 00:17:09.484 "uuid": "68b4c3db-ff03-4eaa-96f6-47c96cdb3d3e", 00:17:09.484 "strip_size_kb": 0, 00:17:09.484 "state": "online", 00:17:09.484 "raid_level": "raid1", 00:17:09.484 "superblock": true, 00:17:09.484 "num_base_bdevs": 4, 00:17:09.484 "num_base_bdevs_discovered": 3, 00:17:09.484 "num_base_bdevs_operational": 3, 00:17:09.484 "base_bdevs_list": [ 00:17:09.484 { 
00:17:09.484 "name": null, 00:17:09.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.484 "is_configured": false, 00:17:09.484 "data_offset": 0, 00:17:09.484 "data_size": 63488 00:17:09.484 }, 00:17:09.484 { 00:17:09.484 "name": "BaseBdev2", 00:17:09.484 "uuid": "5e021cd4-dc5e-53c3-9e32-9687812ccb38", 00:17:09.484 "is_configured": true, 00:17:09.484 "data_offset": 2048, 00:17:09.484 "data_size": 63488 00:17:09.484 }, 00:17:09.484 { 00:17:09.484 "name": "BaseBdev3", 00:17:09.484 "uuid": "7158c6ab-a2e1-5e0f-904b-c341822c0823", 00:17:09.484 "is_configured": true, 00:17:09.484 "data_offset": 2048, 00:17:09.484 "data_size": 63488 00:17:09.484 }, 00:17:09.484 { 00:17:09.484 "name": "BaseBdev4", 00:17:09.484 "uuid": "0ae564be-3b35-517d-90dc-0108b41869fe", 00:17:09.484 "is_configured": true, 00:17:09.484 "data_offset": 2048, 00:17:09.484 "data_size": 63488 00:17:09.484 } 00:17:09.484 ] 00:17:09.484 }' 00:17:09.484 06:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:09.484 06:26:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.484 06:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:09.484 06:26:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.484 06:26:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.484 [2024-11-26 06:26:53.558213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:09.484 [2024-11-26 06:26:53.574272] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:17:09.484 06:26:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.484 06:26:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:09.484 [2024-11-26 06:26:53.576403] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started 
rebuild on raid bdev raid_bdev1 00:17:10.858 06:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:10.858 06:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:10.858 06:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:10.858 06:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:10.858 06:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:10.858 06:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.858 06:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.858 06:26:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.858 06:26:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.858 06:26:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.858 06:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:10.859 "name": "raid_bdev1", 00:17:10.859 "uuid": "68b4c3db-ff03-4eaa-96f6-47c96cdb3d3e", 00:17:10.859 "strip_size_kb": 0, 00:17:10.859 "state": "online", 00:17:10.859 "raid_level": "raid1", 00:17:10.859 "superblock": true, 00:17:10.859 "num_base_bdevs": 4, 00:17:10.859 "num_base_bdevs_discovered": 4, 00:17:10.859 "num_base_bdevs_operational": 4, 00:17:10.859 "process": { 00:17:10.859 "type": "rebuild", 00:17:10.859 "target": "spare", 00:17:10.859 "progress": { 00:17:10.859 "blocks": 20480, 00:17:10.859 "percent": 32 00:17:10.859 } 00:17:10.859 }, 00:17:10.859 "base_bdevs_list": [ 00:17:10.859 { 00:17:10.859 "name": "spare", 00:17:10.859 "uuid": "d9824cc2-dd8e-5ac0-9be8-e4c271ee380f", 00:17:10.859 "is_configured": true, 00:17:10.859 
"data_offset": 2048, 00:17:10.859 "data_size": 63488 00:17:10.859 }, 00:17:10.859 { 00:17:10.859 "name": "BaseBdev2", 00:17:10.859 "uuid": "5e021cd4-dc5e-53c3-9e32-9687812ccb38", 00:17:10.859 "is_configured": true, 00:17:10.859 "data_offset": 2048, 00:17:10.859 "data_size": 63488 00:17:10.859 }, 00:17:10.859 { 00:17:10.859 "name": "BaseBdev3", 00:17:10.859 "uuid": "7158c6ab-a2e1-5e0f-904b-c341822c0823", 00:17:10.859 "is_configured": true, 00:17:10.859 "data_offset": 2048, 00:17:10.859 "data_size": 63488 00:17:10.859 }, 00:17:10.859 { 00:17:10.859 "name": "BaseBdev4", 00:17:10.859 "uuid": "0ae564be-3b35-517d-90dc-0108b41869fe", 00:17:10.859 "is_configured": true, 00:17:10.859 "data_offset": 2048, 00:17:10.859 "data_size": 63488 00:17:10.859 } 00:17:10.859 ] 00:17:10.859 }' 00:17:10.859 06:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:10.859 06:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:10.859 06:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:10.859 06:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:10.859 06:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:10.859 06:26:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.859 06:26:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.859 [2024-11-26 06:26:54.735421] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:10.859 [2024-11-26 06:26:54.782693] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:10.859 [2024-11-26 06:26:54.782777] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:10.859 [2024-11-26 06:26:54.782795] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:10.859 [2024-11-26 06:26:54.782805] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:10.859 06:26:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.859 06:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:10.859 06:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:10.859 06:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:10.859 06:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:10.859 06:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:10.859 06:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:10.859 06:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:10.859 06:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:10.859 06:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:10.859 06:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:10.859 06:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.859 06:26:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.859 06:26:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.859 06:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.859 06:26:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.859 06:26:54 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:10.859 "name": "raid_bdev1", 00:17:10.859 "uuid": "68b4c3db-ff03-4eaa-96f6-47c96cdb3d3e", 00:17:10.859 "strip_size_kb": 0, 00:17:10.859 "state": "online", 00:17:10.859 "raid_level": "raid1", 00:17:10.859 "superblock": true, 00:17:10.859 "num_base_bdevs": 4, 00:17:10.859 "num_base_bdevs_discovered": 3, 00:17:10.859 "num_base_bdevs_operational": 3, 00:17:10.859 "base_bdevs_list": [ 00:17:10.859 { 00:17:10.859 "name": null, 00:17:10.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.859 "is_configured": false, 00:17:10.859 "data_offset": 0, 00:17:10.859 "data_size": 63488 00:17:10.859 }, 00:17:10.859 { 00:17:10.859 "name": "BaseBdev2", 00:17:10.859 "uuid": "5e021cd4-dc5e-53c3-9e32-9687812ccb38", 00:17:10.859 "is_configured": true, 00:17:10.859 "data_offset": 2048, 00:17:10.859 "data_size": 63488 00:17:10.859 }, 00:17:10.859 { 00:17:10.859 "name": "BaseBdev3", 00:17:10.859 "uuid": "7158c6ab-a2e1-5e0f-904b-c341822c0823", 00:17:10.859 "is_configured": true, 00:17:10.859 "data_offset": 2048, 00:17:10.859 "data_size": 63488 00:17:10.859 }, 00:17:10.859 { 00:17:10.859 "name": "BaseBdev4", 00:17:10.859 "uuid": "0ae564be-3b35-517d-90dc-0108b41869fe", 00:17:10.859 "is_configured": true, 00:17:10.859 "data_offset": 2048, 00:17:10.859 "data_size": 63488 00:17:10.859 } 00:17:10.859 ] 00:17:10.859 }' 00:17:10.859 06:26:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:10.859 06:26:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.426 06:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:11.426 06:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:11.426 06:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:11.426 06:26:55 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@171 -- # local target=none 00:17:11.426 06:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:11.426 06:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.426 06:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.426 06:26:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.426 06:26:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.426 06:26:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.426 06:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:11.426 "name": "raid_bdev1", 00:17:11.426 "uuid": "68b4c3db-ff03-4eaa-96f6-47c96cdb3d3e", 00:17:11.426 "strip_size_kb": 0, 00:17:11.426 "state": "online", 00:17:11.426 "raid_level": "raid1", 00:17:11.426 "superblock": true, 00:17:11.426 "num_base_bdevs": 4, 00:17:11.426 "num_base_bdevs_discovered": 3, 00:17:11.426 "num_base_bdevs_operational": 3, 00:17:11.426 "base_bdevs_list": [ 00:17:11.426 { 00:17:11.426 "name": null, 00:17:11.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.426 "is_configured": false, 00:17:11.426 "data_offset": 0, 00:17:11.426 "data_size": 63488 00:17:11.426 }, 00:17:11.426 { 00:17:11.426 "name": "BaseBdev2", 00:17:11.426 "uuid": "5e021cd4-dc5e-53c3-9e32-9687812ccb38", 00:17:11.426 "is_configured": true, 00:17:11.426 "data_offset": 2048, 00:17:11.426 "data_size": 63488 00:17:11.426 }, 00:17:11.426 { 00:17:11.426 "name": "BaseBdev3", 00:17:11.426 "uuid": "7158c6ab-a2e1-5e0f-904b-c341822c0823", 00:17:11.426 "is_configured": true, 00:17:11.426 "data_offset": 2048, 00:17:11.426 "data_size": 63488 00:17:11.426 }, 00:17:11.426 { 00:17:11.426 "name": "BaseBdev4", 00:17:11.426 "uuid": "0ae564be-3b35-517d-90dc-0108b41869fe", 00:17:11.426 
"is_configured": true, 00:17:11.426 "data_offset": 2048, 00:17:11.426 "data_size": 63488 00:17:11.426 } 00:17:11.426 ] 00:17:11.426 }' 00:17:11.426 06:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:11.426 06:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:11.426 06:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:11.426 06:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:11.426 06:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:11.426 06:26:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.426 06:26:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.426 [2024-11-26 06:26:55.450705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:11.426 [2024-11-26 06:26:55.467207] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:17:11.426 06:26:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.426 06:26:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:11.426 [2024-11-26 06:26:55.469442] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:12.361 06:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:12.361 06:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:12.361 06:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:12.361 06:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:12.361 06:26:56 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:12.361 06:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.361 06:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.361 06:26:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.361 06:26:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.620 06:26:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.620 06:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:12.620 "name": "raid_bdev1", 00:17:12.620 "uuid": "68b4c3db-ff03-4eaa-96f6-47c96cdb3d3e", 00:17:12.620 "strip_size_kb": 0, 00:17:12.620 "state": "online", 00:17:12.620 "raid_level": "raid1", 00:17:12.620 "superblock": true, 00:17:12.620 "num_base_bdevs": 4, 00:17:12.620 "num_base_bdevs_discovered": 4, 00:17:12.620 "num_base_bdevs_operational": 4, 00:17:12.620 "process": { 00:17:12.620 "type": "rebuild", 00:17:12.620 "target": "spare", 00:17:12.620 "progress": { 00:17:12.620 "blocks": 20480, 00:17:12.620 "percent": 32 00:17:12.620 } 00:17:12.620 }, 00:17:12.620 "base_bdevs_list": [ 00:17:12.620 { 00:17:12.620 "name": "spare", 00:17:12.620 "uuid": "d9824cc2-dd8e-5ac0-9be8-e4c271ee380f", 00:17:12.620 "is_configured": true, 00:17:12.620 "data_offset": 2048, 00:17:12.620 "data_size": 63488 00:17:12.620 }, 00:17:12.620 { 00:17:12.620 "name": "BaseBdev2", 00:17:12.620 "uuid": "5e021cd4-dc5e-53c3-9e32-9687812ccb38", 00:17:12.620 "is_configured": true, 00:17:12.620 "data_offset": 2048, 00:17:12.620 "data_size": 63488 00:17:12.620 }, 00:17:12.620 { 00:17:12.620 "name": "BaseBdev3", 00:17:12.620 "uuid": "7158c6ab-a2e1-5e0f-904b-c341822c0823", 00:17:12.620 "is_configured": true, 00:17:12.620 "data_offset": 2048, 00:17:12.620 "data_size": 63488 
00:17:12.620 }, 00:17:12.620 { 00:17:12.620 "name": "BaseBdev4", 00:17:12.620 "uuid": "0ae564be-3b35-517d-90dc-0108b41869fe", 00:17:12.620 "is_configured": true, 00:17:12.620 "data_offset": 2048, 00:17:12.620 "data_size": 63488 00:17:12.620 } 00:17:12.620 ] 00:17:12.620 }' 00:17:12.620 06:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:12.620 06:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:12.620 06:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:12.620 06:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:12.620 06:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:12.620 06:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:12.620 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:12.620 06:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:17:12.620 06:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:12.620 06:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:17:12.620 06:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:12.620 06:26:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.620 06:26:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.620 [2024-11-26 06:26:56.605825] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:12.879 [2024-11-26 06:26:56.775510] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:17:12.879 06:26:56 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.879 06:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:17:12.879 06:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:17:12.879 06:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:12.879 06:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:12.879 06:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:12.879 06:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:12.879 06:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:12.879 06:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.879 06:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.879 06:26:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.879 06:26:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.879 06:26:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.879 06:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:12.879 "name": "raid_bdev1", 00:17:12.879 "uuid": "68b4c3db-ff03-4eaa-96f6-47c96cdb3d3e", 00:17:12.879 "strip_size_kb": 0, 00:17:12.879 "state": "online", 00:17:12.879 "raid_level": "raid1", 00:17:12.879 "superblock": true, 00:17:12.879 "num_base_bdevs": 4, 00:17:12.879 "num_base_bdevs_discovered": 3, 00:17:12.879 "num_base_bdevs_operational": 3, 00:17:12.879 "process": { 00:17:12.879 "type": "rebuild", 00:17:12.879 "target": "spare", 00:17:12.879 "progress": { 00:17:12.879 "blocks": 24576, 00:17:12.879 
"percent": 38 00:17:12.879 } 00:17:12.879 }, 00:17:12.879 "base_bdevs_list": [ 00:17:12.879 { 00:17:12.879 "name": "spare", 00:17:12.879 "uuid": "d9824cc2-dd8e-5ac0-9be8-e4c271ee380f", 00:17:12.879 "is_configured": true, 00:17:12.879 "data_offset": 2048, 00:17:12.879 "data_size": 63488 00:17:12.879 }, 00:17:12.879 { 00:17:12.879 "name": null, 00:17:12.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.879 "is_configured": false, 00:17:12.879 "data_offset": 0, 00:17:12.879 "data_size": 63488 00:17:12.879 }, 00:17:12.879 { 00:17:12.879 "name": "BaseBdev3", 00:17:12.880 "uuid": "7158c6ab-a2e1-5e0f-904b-c341822c0823", 00:17:12.880 "is_configured": true, 00:17:12.880 "data_offset": 2048, 00:17:12.880 "data_size": 63488 00:17:12.880 }, 00:17:12.880 { 00:17:12.880 "name": "BaseBdev4", 00:17:12.880 "uuid": "0ae564be-3b35-517d-90dc-0108b41869fe", 00:17:12.880 "is_configured": true, 00:17:12.880 "data_offset": 2048, 00:17:12.880 "data_size": 63488 00:17:12.880 } 00:17:12.880 ] 00:17:12.880 }' 00:17:12.880 06:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:12.880 06:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:12.880 06:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:12.880 06:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:12.880 06:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=489 00:17:12.880 06:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:12.880 06:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:12.880 06:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:12.880 06:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 
-- # local process_type=rebuild 00:17:12.880 06:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:12.880 06:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:12.880 06:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.880 06:26:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.880 06:26:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.880 06:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.880 06:26:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.880 06:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:12.880 "name": "raid_bdev1", 00:17:12.880 "uuid": "68b4c3db-ff03-4eaa-96f6-47c96cdb3d3e", 00:17:12.880 "strip_size_kb": 0, 00:17:12.880 "state": "online", 00:17:12.880 "raid_level": "raid1", 00:17:12.880 "superblock": true, 00:17:12.880 "num_base_bdevs": 4, 00:17:12.880 "num_base_bdevs_discovered": 3, 00:17:12.880 "num_base_bdevs_operational": 3, 00:17:12.880 "process": { 00:17:12.880 "type": "rebuild", 00:17:12.880 "target": "spare", 00:17:12.880 "progress": { 00:17:12.880 "blocks": 26624, 00:17:12.880 "percent": 41 00:17:12.880 } 00:17:12.880 }, 00:17:12.880 "base_bdevs_list": [ 00:17:12.880 { 00:17:12.880 "name": "spare", 00:17:12.880 "uuid": "d9824cc2-dd8e-5ac0-9be8-e4c271ee380f", 00:17:12.880 "is_configured": true, 00:17:12.880 "data_offset": 2048, 00:17:12.880 "data_size": 63488 00:17:12.880 }, 00:17:12.880 { 00:17:12.880 "name": null, 00:17:12.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.880 "is_configured": false, 00:17:12.880 "data_offset": 0, 00:17:12.880 "data_size": 63488 00:17:12.880 }, 00:17:12.880 { 00:17:12.880 "name": "BaseBdev3", 00:17:12.880 "uuid": 
"7158c6ab-a2e1-5e0f-904b-c341822c0823", 00:17:12.880 "is_configured": true, 00:17:12.880 "data_offset": 2048, 00:17:12.880 "data_size": 63488 00:17:12.880 }, 00:17:12.880 { 00:17:12.880 "name": "BaseBdev4", 00:17:12.880 "uuid": "0ae564be-3b35-517d-90dc-0108b41869fe", 00:17:12.880 "is_configured": true, 00:17:12.880 "data_offset": 2048, 00:17:12.880 "data_size": 63488 00:17:12.880 } 00:17:12.880 ] 00:17:12.880 }' 00:17:12.880 06:26:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:12.880 06:26:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:12.880 06:26:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:13.138 06:26:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:13.138 06:26:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:14.072 06:26:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:14.072 06:26:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:14.073 06:26:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:14.073 06:26:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:14.073 06:26:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:14.073 06:26:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:14.073 06:26:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.073 06:26:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.073 06:26:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.073 
06:26:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.073 06:26:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.073 06:26:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:14.073 "name": "raid_bdev1", 00:17:14.073 "uuid": "68b4c3db-ff03-4eaa-96f6-47c96cdb3d3e", 00:17:14.073 "strip_size_kb": 0, 00:17:14.073 "state": "online", 00:17:14.073 "raid_level": "raid1", 00:17:14.073 "superblock": true, 00:17:14.073 "num_base_bdevs": 4, 00:17:14.073 "num_base_bdevs_discovered": 3, 00:17:14.073 "num_base_bdevs_operational": 3, 00:17:14.073 "process": { 00:17:14.073 "type": "rebuild", 00:17:14.073 "target": "spare", 00:17:14.073 "progress": { 00:17:14.073 "blocks": 49152, 00:17:14.073 "percent": 77 00:17:14.073 } 00:17:14.073 }, 00:17:14.073 "base_bdevs_list": [ 00:17:14.073 { 00:17:14.073 "name": "spare", 00:17:14.073 "uuid": "d9824cc2-dd8e-5ac0-9be8-e4c271ee380f", 00:17:14.073 "is_configured": true, 00:17:14.073 "data_offset": 2048, 00:17:14.073 "data_size": 63488 00:17:14.073 }, 00:17:14.073 { 00:17:14.073 "name": null, 00:17:14.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.073 "is_configured": false, 00:17:14.073 "data_offset": 0, 00:17:14.073 "data_size": 63488 00:17:14.073 }, 00:17:14.073 { 00:17:14.073 "name": "BaseBdev3", 00:17:14.073 "uuid": "7158c6ab-a2e1-5e0f-904b-c341822c0823", 00:17:14.073 "is_configured": true, 00:17:14.073 "data_offset": 2048, 00:17:14.073 "data_size": 63488 00:17:14.073 }, 00:17:14.073 { 00:17:14.073 "name": "BaseBdev4", 00:17:14.073 "uuid": "0ae564be-3b35-517d-90dc-0108b41869fe", 00:17:14.073 "is_configured": true, 00:17:14.073 "data_offset": 2048, 00:17:14.073 "data_size": 63488 00:17:14.073 } 00:17:14.073 ] 00:17:14.073 }' 00:17:14.073 06:26:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:14.073 06:26:58 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:14.073 06:26:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:14.330 06:26:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:14.330 06:26:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:14.589 [2024-11-26 06:26:58.684861] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:14.589 [2024-11-26 06:26:58.684952] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:14.589 [2024-11-26 06:26:58.685098] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:15.157 06:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:15.157 06:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:15.157 06:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:15.157 06:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:15.157 06:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:15.157 06:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:15.157 06:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.157 06:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.157 06:26:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.157 06:26:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.157 06:26:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.425 06:26:59 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:15.425 "name": "raid_bdev1", 00:17:15.425 "uuid": "68b4c3db-ff03-4eaa-96f6-47c96cdb3d3e", 00:17:15.425 "strip_size_kb": 0, 00:17:15.425 "state": "online", 00:17:15.425 "raid_level": "raid1", 00:17:15.425 "superblock": true, 00:17:15.425 "num_base_bdevs": 4, 00:17:15.425 "num_base_bdevs_discovered": 3, 00:17:15.425 "num_base_bdevs_operational": 3, 00:17:15.425 "base_bdevs_list": [ 00:17:15.425 { 00:17:15.425 "name": "spare", 00:17:15.425 "uuid": "d9824cc2-dd8e-5ac0-9be8-e4c271ee380f", 00:17:15.425 "is_configured": true, 00:17:15.425 "data_offset": 2048, 00:17:15.425 "data_size": 63488 00:17:15.425 }, 00:17:15.425 { 00:17:15.425 "name": null, 00:17:15.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.425 "is_configured": false, 00:17:15.425 "data_offset": 0, 00:17:15.425 "data_size": 63488 00:17:15.425 }, 00:17:15.425 { 00:17:15.425 "name": "BaseBdev3", 00:17:15.425 "uuid": "7158c6ab-a2e1-5e0f-904b-c341822c0823", 00:17:15.425 "is_configured": true, 00:17:15.425 "data_offset": 2048, 00:17:15.425 "data_size": 63488 00:17:15.425 }, 00:17:15.425 { 00:17:15.425 "name": "BaseBdev4", 00:17:15.425 "uuid": "0ae564be-3b35-517d-90dc-0108b41869fe", 00:17:15.425 "is_configured": true, 00:17:15.425 "data_offset": 2048, 00:17:15.425 "data_size": 63488 00:17:15.425 } 00:17:15.425 ] 00:17:15.426 }' 00:17:15.426 06:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:15.426 06:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:15.426 06:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:15.426 06:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:15.426 06:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:17:15.426 06:26:59 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:15.426 06:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:15.426 06:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:15.426 06:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:15.426 06:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:15.426 06:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.426 06:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.426 06:26:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.426 06:26:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.426 06:26:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.426 06:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:15.426 "name": "raid_bdev1", 00:17:15.426 "uuid": "68b4c3db-ff03-4eaa-96f6-47c96cdb3d3e", 00:17:15.426 "strip_size_kb": 0, 00:17:15.426 "state": "online", 00:17:15.426 "raid_level": "raid1", 00:17:15.426 "superblock": true, 00:17:15.426 "num_base_bdevs": 4, 00:17:15.426 "num_base_bdevs_discovered": 3, 00:17:15.426 "num_base_bdevs_operational": 3, 00:17:15.426 "base_bdevs_list": [ 00:17:15.426 { 00:17:15.426 "name": "spare", 00:17:15.426 "uuid": "d9824cc2-dd8e-5ac0-9be8-e4c271ee380f", 00:17:15.426 "is_configured": true, 00:17:15.426 "data_offset": 2048, 00:17:15.426 "data_size": 63488 00:17:15.426 }, 00:17:15.426 { 00:17:15.426 "name": null, 00:17:15.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.426 "is_configured": false, 00:17:15.426 "data_offset": 0, 00:17:15.426 "data_size": 63488 00:17:15.426 }, 00:17:15.426 { 
00:17:15.426 "name": "BaseBdev3", 00:17:15.426 "uuid": "7158c6ab-a2e1-5e0f-904b-c341822c0823", 00:17:15.426 "is_configured": true, 00:17:15.426 "data_offset": 2048, 00:17:15.426 "data_size": 63488 00:17:15.426 }, 00:17:15.426 { 00:17:15.426 "name": "BaseBdev4", 00:17:15.426 "uuid": "0ae564be-3b35-517d-90dc-0108b41869fe", 00:17:15.426 "is_configured": true, 00:17:15.426 "data_offset": 2048, 00:17:15.426 "data_size": 63488 00:17:15.426 } 00:17:15.426 ] 00:17:15.426 }' 00:17:15.426 06:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:15.426 06:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:15.426 06:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:15.426 06:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:15.426 06:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:15.426 06:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:15.426 06:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:15.426 06:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:15.426 06:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:15.426 06:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:15.426 06:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:15.426 06:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:15.426 06:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:15.426 06:26:59 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:17:15.426 06:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.426 06:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.426 06:26:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.426 06:26:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.426 06:26:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.426 06:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:15.426 "name": "raid_bdev1", 00:17:15.426 "uuid": "68b4c3db-ff03-4eaa-96f6-47c96cdb3d3e", 00:17:15.426 "strip_size_kb": 0, 00:17:15.426 "state": "online", 00:17:15.426 "raid_level": "raid1", 00:17:15.426 "superblock": true, 00:17:15.426 "num_base_bdevs": 4, 00:17:15.426 "num_base_bdevs_discovered": 3, 00:17:15.426 "num_base_bdevs_operational": 3, 00:17:15.426 "base_bdevs_list": [ 00:17:15.426 { 00:17:15.426 "name": "spare", 00:17:15.426 "uuid": "d9824cc2-dd8e-5ac0-9be8-e4c271ee380f", 00:17:15.426 "is_configured": true, 00:17:15.426 "data_offset": 2048, 00:17:15.426 "data_size": 63488 00:17:15.426 }, 00:17:15.426 { 00:17:15.426 "name": null, 00:17:15.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.426 "is_configured": false, 00:17:15.426 "data_offset": 0, 00:17:15.426 "data_size": 63488 00:17:15.426 }, 00:17:15.426 { 00:17:15.426 "name": "BaseBdev3", 00:17:15.426 "uuid": "7158c6ab-a2e1-5e0f-904b-c341822c0823", 00:17:15.426 "is_configured": true, 00:17:15.426 "data_offset": 2048, 00:17:15.426 "data_size": 63488 00:17:15.426 }, 00:17:15.426 { 00:17:15.426 "name": "BaseBdev4", 00:17:15.426 "uuid": "0ae564be-3b35-517d-90dc-0108b41869fe", 00:17:15.426 "is_configured": true, 00:17:15.426 "data_offset": 2048, 00:17:15.426 "data_size": 63488 00:17:15.426 } 00:17:15.426 ] 
00:17:15.426 }' 00:17:15.426 06:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:15.426 06:26:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.990 06:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:15.990 06:26:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.990 06:26:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.990 [2024-11-26 06:26:59.841100] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:15.990 [2024-11-26 06:26:59.841148] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:15.990 [2024-11-26 06:26:59.841242] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:15.990 [2024-11-26 06:26:59.841323] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:15.990 [2024-11-26 06:26:59.841335] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:15.990 06:26:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.990 06:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.990 06:26:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.990 06:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:17:15.990 06:26:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.990 06:26:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.990 06:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:15.990 06:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 
-- # '[' true = true ']' 00:17:15.990 06:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:15.990 06:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:15.990 06:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:15.990 06:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:15.990 06:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:15.990 06:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:15.990 06:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:15.990 06:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:15.990 06:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:15.990 06:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:15.990 06:26:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:15.990 /dev/nbd0 00:17:16.249 06:27:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:16.249 06:27:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:16.249 06:27:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:16.249 06:27:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:16.249 06:27:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:16.249 06:27:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:16.249 06:27:00 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:16.249 06:27:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:16.249 06:27:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:16.249 06:27:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:16.249 06:27:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:16.249 1+0 records in 00:17:16.249 1+0 records out 00:17:16.249 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000352925 s, 11.6 MB/s 00:17:16.249 06:27:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:16.249 06:27:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:16.249 06:27:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:16.249 06:27:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:16.249 06:27:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:16.249 06:27:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:16.249 06:27:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:16.249 06:27:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:16.507 /dev/nbd1 00:17:16.507 06:27:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:16.507 06:27:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:16.507 06:27:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 
00:17:16.507 06:27:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:16.507 06:27:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:16.507 06:27:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:16.507 06:27:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:16.507 06:27:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:16.507 06:27:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:16.507 06:27:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:16.507 06:27:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:16.507 1+0 records in 00:17:16.507 1+0 records out 00:17:16.507 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00048619 s, 8.4 MB/s 00:17:16.507 06:27:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:16.507 06:27:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:16.507 06:27:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:16.507 06:27:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:16.507 06:27:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:16.507 06:27:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:16.507 06:27:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:16.507 06:27:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:16.507 06:27:00 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:16.507 06:27:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:16.507 06:27:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:16.507 06:27:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:16.507 06:27:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:16.507 06:27:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:16.507 06:27:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:16.766 06:27:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:16.766 06:27:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:16.766 06:27:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:16.766 06:27:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:16.766 06:27:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:16.766 06:27:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:16.766 06:27:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:16.766 06:27:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:16.766 06:27:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:16.766 06:27:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:17.024 06:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd1 00:17:17.024 06:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:17.024 06:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:17.024 06:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:17.024 06:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:17.024 06:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:17.024 06:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:17.024 06:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:17.024 06:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:17.024 06:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:17.024 06:27:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.024 06:27:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.024 06:27:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.024 06:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:17.024 06:27:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.024 06:27:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.024 [2024-11-26 06:27:01.120976] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:17.024 [2024-11-26 06:27:01.121042] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:17.024 [2024-11-26 06:27:01.121075] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:17:17.024 [2024-11-26 06:27:01.121089] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:17.024 [2024-11-26 06:27:01.123408] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:17.024 [2024-11-26 06:27:01.123445] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:17.024 spare 00:17:17.024 [2024-11-26 06:27:01.123538] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:17.024 [2024-11-26 06:27:01.123597] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:17.024 [2024-11-26 06:27:01.123739] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:17.024 [2024-11-26 06:27:01.123821] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:17.024 06:27:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.024 06:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:17.024 06:27:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.024 06:27:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.283 [2024-11-26 06:27:01.223724] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:17.283 [2024-11-26 06:27:01.223777] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:17.283 [2024-11-26 06:27:01.224216] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:17:17.283 [2024-11-26 06:27:01.224511] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:17.283 [2024-11-26 06:27:01.224541] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:17.283 [2024-11-26 06:27:01.224836] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:17:17.283 06:27:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.283 06:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:17.283 06:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:17.283 06:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:17.283 06:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:17.283 06:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:17.283 06:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:17.283 06:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:17.283 06:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:17.283 06:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:17.283 06:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:17.283 06:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.283 06:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.283 06:27:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.283 06:27:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.283 06:27:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.283 06:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:17.283 "name": "raid_bdev1", 00:17:17.283 "uuid": "68b4c3db-ff03-4eaa-96f6-47c96cdb3d3e", 00:17:17.283 
"strip_size_kb": 0, 00:17:17.283 "state": "online", 00:17:17.283 "raid_level": "raid1", 00:17:17.283 "superblock": true, 00:17:17.283 "num_base_bdevs": 4, 00:17:17.283 "num_base_bdevs_discovered": 3, 00:17:17.283 "num_base_bdevs_operational": 3, 00:17:17.283 "base_bdevs_list": [ 00:17:17.283 { 00:17:17.283 "name": "spare", 00:17:17.283 "uuid": "d9824cc2-dd8e-5ac0-9be8-e4c271ee380f", 00:17:17.283 "is_configured": true, 00:17:17.283 "data_offset": 2048, 00:17:17.283 "data_size": 63488 00:17:17.283 }, 00:17:17.283 { 00:17:17.283 "name": null, 00:17:17.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.283 "is_configured": false, 00:17:17.283 "data_offset": 2048, 00:17:17.283 "data_size": 63488 00:17:17.283 }, 00:17:17.283 { 00:17:17.283 "name": "BaseBdev3", 00:17:17.283 "uuid": "7158c6ab-a2e1-5e0f-904b-c341822c0823", 00:17:17.283 "is_configured": true, 00:17:17.283 "data_offset": 2048, 00:17:17.283 "data_size": 63488 00:17:17.283 }, 00:17:17.283 { 00:17:17.283 "name": "BaseBdev4", 00:17:17.283 "uuid": "0ae564be-3b35-517d-90dc-0108b41869fe", 00:17:17.283 "is_configured": true, 00:17:17.283 "data_offset": 2048, 00:17:17.283 "data_size": 63488 00:17:17.283 } 00:17:17.283 ] 00:17:17.283 }' 00:17:17.283 06:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:17.283 06:27:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.849 06:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:17.849 06:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:17.849 06:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:17.849 06:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:17.849 06:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:17.849 06:27:01 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.849 06:27:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.849 06:27:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.849 06:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.849 06:27:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.849 06:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:17.849 "name": "raid_bdev1", 00:17:17.849 "uuid": "68b4c3db-ff03-4eaa-96f6-47c96cdb3d3e", 00:17:17.849 "strip_size_kb": 0, 00:17:17.849 "state": "online", 00:17:17.849 "raid_level": "raid1", 00:17:17.849 "superblock": true, 00:17:17.849 "num_base_bdevs": 4, 00:17:17.849 "num_base_bdevs_discovered": 3, 00:17:17.849 "num_base_bdevs_operational": 3, 00:17:17.849 "base_bdevs_list": [ 00:17:17.849 { 00:17:17.849 "name": "spare", 00:17:17.849 "uuid": "d9824cc2-dd8e-5ac0-9be8-e4c271ee380f", 00:17:17.849 "is_configured": true, 00:17:17.849 "data_offset": 2048, 00:17:17.849 "data_size": 63488 00:17:17.849 }, 00:17:17.849 { 00:17:17.849 "name": null, 00:17:17.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.849 "is_configured": false, 00:17:17.849 "data_offset": 2048, 00:17:17.849 "data_size": 63488 00:17:17.849 }, 00:17:17.849 { 00:17:17.849 "name": "BaseBdev3", 00:17:17.849 "uuid": "7158c6ab-a2e1-5e0f-904b-c341822c0823", 00:17:17.849 "is_configured": true, 00:17:17.849 "data_offset": 2048, 00:17:17.849 "data_size": 63488 00:17:17.849 }, 00:17:17.849 { 00:17:17.849 "name": "BaseBdev4", 00:17:17.849 "uuid": "0ae564be-3b35-517d-90dc-0108b41869fe", 00:17:17.849 "is_configured": true, 00:17:17.849 "data_offset": 2048, 00:17:17.849 "data_size": 63488 00:17:17.850 } 00:17:17.850 ] 00:17:17.850 }' 00:17:17.850 06:27:01 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:17.850 06:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:17.850 06:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:17.850 06:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:17.850 06:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:17.850 06:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.850 06:27:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.850 06:27:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.850 06:27:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.850 06:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:17.850 06:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:17.850 06:27:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.850 06:27:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.850 [2024-11-26 06:27:01.915763] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:17.850 06:27:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.850 06:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:17.850 06:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:17.850 06:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:17.850 06:27:01 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:17.850 06:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:17.850 06:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:17.850 06:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:17.850 06:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:17.850 06:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:17.850 06:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:17.850 06:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.850 06:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.850 06:27:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.850 06:27:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.850 06:27:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.850 06:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:17.850 "name": "raid_bdev1", 00:17:17.850 "uuid": "68b4c3db-ff03-4eaa-96f6-47c96cdb3d3e", 00:17:17.850 "strip_size_kb": 0, 00:17:17.850 "state": "online", 00:17:17.850 "raid_level": "raid1", 00:17:17.850 "superblock": true, 00:17:17.850 "num_base_bdevs": 4, 00:17:17.850 "num_base_bdevs_discovered": 2, 00:17:17.850 "num_base_bdevs_operational": 2, 00:17:17.850 "base_bdevs_list": [ 00:17:17.850 { 00:17:17.850 "name": null, 00:17:17.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.850 "is_configured": false, 00:17:17.850 "data_offset": 0, 00:17:17.850 "data_size": 63488 00:17:17.850 }, 00:17:17.850 { 
00:17:17.850 "name": null, 00:17:17.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.850 "is_configured": false, 00:17:17.850 "data_offset": 2048, 00:17:17.850 "data_size": 63488 00:17:17.850 }, 00:17:17.850 { 00:17:17.850 "name": "BaseBdev3", 00:17:17.850 "uuid": "7158c6ab-a2e1-5e0f-904b-c341822c0823", 00:17:17.850 "is_configured": true, 00:17:17.850 "data_offset": 2048, 00:17:17.850 "data_size": 63488 00:17:17.850 }, 00:17:17.850 { 00:17:17.850 "name": "BaseBdev4", 00:17:17.850 "uuid": "0ae564be-3b35-517d-90dc-0108b41869fe", 00:17:17.850 "is_configured": true, 00:17:17.850 "data_offset": 2048, 00:17:17.850 "data_size": 63488 00:17:17.850 } 00:17:17.850 ] 00:17:17.850 }' 00:17:17.850 06:27:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:17.850 06:27:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.417 06:27:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:18.417 06:27:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.417 06:27:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.417 [2024-11-26 06:27:02.399110] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:18.417 [2024-11-26 06:27:02.399344] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:17:18.417 [2024-11-26 06:27:02.399369] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:18.417 [2024-11-26 06:27:02.399406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:18.417 [2024-11-26 06:27:02.413818] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:17:18.417 06:27:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.417 06:27:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:18.417 [2024-11-26 06:27:02.415719] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:19.355 06:27:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:19.355 06:27:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:19.355 06:27:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:19.355 06:27:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:19.355 06:27:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:19.355 06:27:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.355 06:27:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.355 06:27:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.355 06:27:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.355 06:27:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.355 06:27:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:19.355 "name": "raid_bdev1", 00:17:19.355 "uuid": "68b4c3db-ff03-4eaa-96f6-47c96cdb3d3e", 00:17:19.355 "strip_size_kb": 0, 00:17:19.355 "state": "online", 00:17:19.355 "raid_level": "raid1", 
00:17:19.355 "superblock": true, 00:17:19.355 "num_base_bdevs": 4, 00:17:19.355 "num_base_bdevs_discovered": 3, 00:17:19.355 "num_base_bdevs_operational": 3, 00:17:19.355 "process": { 00:17:19.355 "type": "rebuild", 00:17:19.355 "target": "spare", 00:17:19.355 "progress": { 00:17:19.355 "blocks": 20480, 00:17:19.355 "percent": 32 00:17:19.355 } 00:17:19.355 }, 00:17:19.355 "base_bdevs_list": [ 00:17:19.355 { 00:17:19.355 "name": "spare", 00:17:19.355 "uuid": "d9824cc2-dd8e-5ac0-9be8-e4c271ee380f", 00:17:19.355 "is_configured": true, 00:17:19.355 "data_offset": 2048, 00:17:19.355 "data_size": 63488 00:17:19.355 }, 00:17:19.355 { 00:17:19.355 "name": null, 00:17:19.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.355 "is_configured": false, 00:17:19.355 "data_offset": 2048, 00:17:19.355 "data_size": 63488 00:17:19.355 }, 00:17:19.355 { 00:17:19.355 "name": "BaseBdev3", 00:17:19.355 "uuid": "7158c6ab-a2e1-5e0f-904b-c341822c0823", 00:17:19.355 "is_configured": true, 00:17:19.355 "data_offset": 2048, 00:17:19.355 "data_size": 63488 00:17:19.355 }, 00:17:19.355 { 00:17:19.355 "name": "BaseBdev4", 00:17:19.355 "uuid": "0ae564be-3b35-517d-90dc-0108b41869fe", 00:17:19.355 "is_configured": true, 00:17:19.355 "data_offset": 2048, 00:17:19.355 "data_size": 63488 00:17:19.355 } 00:17:19.355 ] 00:17:19.355 }' 00:17:19.355 06:27:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:19.616 06:27:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:19.616 06:27:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:19.616 06:27:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:19.616 06:27:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:19.616 06:27:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:19.616 06:27:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.616 [2024-11-26 06:27:03.555283] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:19.616 [2024-11-26 06:27:03.621511] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:19.616 [2024-11-26 06:27:03.621584] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:19.616 [2024-11-26 06:27:03.621604] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:19.616 [2024-11-26 06:27:03.621611] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:19.616 06:27:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.616 06:27:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:19.616 06:27:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:19.616 06:27:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:19.616 06:27:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:19.616 06:27:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:19.616 06:27:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:19.616 06:27:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:19.616 06:27:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:19.616 06:27:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:19.616 06:27:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:19.616 06:27:03 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.616 06:27:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.616 06:27:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.616 06:27:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.616 06:27:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.616 06:27:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:19.616 "name": "raid_bdev1", 00:17:19.616 "uuid": "68b4c3db-ff03-4eaa-96f6-47c96cdb3d3e", 00:17:19.616 "strip_size_kb": 0, 00:17:19.616 "state": "online", 00:17:19.616 "raid_level": "raid1", 00:17:19.616 "superblock": true, 00:17:19.616 "num_base_bdevs": 4, 00:17:19.616 "num_base_bdevs_discovered": 2, 00:17:19.616 "num_base_bdevs_operational": 2, 00:17:19.616 "base_bdevs_list": [ 00:17:19.616 { 00:17:19.616 "name": null, 00:17:19.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.616 "is_configured": false, 00:17:19.616 "data_offset": 0, 00:17:19.616 "data_size": 63488 00:17:19.616 }, 00:17:19.616 { 00:17:19.616 "name": null, 00:17:19.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.616 "is_configured": false, 00:17:19.616 "data_offset": 2048, 00:17:19.616 "data_size": 63488 00:17:19.616 }, 00:17:19.616 { 00:17:19.616 "name": "BaseBdev3", 00:17:19.616 "uuid": "7158c6ab-a2e1-5e0f-904b-c341822c0823", 00:17:19.616 "is_configured": true, 00:17:19.616 "data_offset": 2048, 00:17:19.616 "data_size": 63488 00:17:19.616 }, 00:17:19.616 { 00:17:19.616 "name": "BaseBdev4", 00:17:19.616 "uuid": "0ae564be-3b35-517d-90dc-0108b41869fe", 00:17:19.616 "is_configured": true, 00:17:19.616 "data_offset": 2048, 00:17:19.616 "data_size": 63488 00:17:19.616 } 00:17:19.616 ] 00:17:19.616 }' 00:17:19.616 06:27:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:17:19.616 06:27:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.187 06:27:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:20.187 06:27:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.187 06:27:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.187 [2024-11-26 06:27:04.138533] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:20.187 [2024-11-26 06:27:04.138614] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:20.187 [2024-11-26 06:27:04.138638] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:17:20.187 [2024-11-26 06:27:04.138648] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:20.187 [2024-11-26 06:27:04.139192] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:20.187 [2024-11-26 06:27:04.139219] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:20.187 [2024-11-26 06:27:04.139320] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:20.187 [2024-11-26 06:27:04.139349] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:17:20.187 [2024-11-26 06:27:04.139365] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:20.187 [2024-11-26 06:27:04.139394] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:20.187 [2024-11-26 06:27:04.153413] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:17:20.187 spare 00:17:20.187 06:27:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.187 [2024-11-26 06:27:04.155268] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:20.187 06:27:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:21.127 06:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:21.127 06:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:21.127 06:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:21.127 06:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:21.127 06:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:21.127 06:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.127 06:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.127 06:27:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.127 06:27:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.127 06:27:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.127 06:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:21.127 "name": "raid_bdev1", 00:17:21.127 "uuid": "68b4c3db-ff03-4eaa-96f6-47c96cdb3d3e", 00:17:21.127 "strip_size_kb": 0, 00:17:21.127 "state": "online", 00:17:21.127 
"raid_level": "raid1", 00:17:21.127 "superblock": true, 00:17:21.127 "num_base_bdevs": 4, 00:17:21.127 "num_base_bdevs_discovered": 3, 00:17:21.127 "num_base_bdevs_operational": 3, 00:17:21.127 "process": { 00:17:21.127 "type": "rebuild", 00:17:21.127 "target": "spare", 00:17:21.127 "progress": { 00:17:21.127 "blocks": 20480, 00:17:21.127 "percent": 32 00:17:21.127 } 00:17:21.127 }, 00:17:21.127 "base_bdevs_list": [ 00:17:21.127 { 00:17:21.127 "name": "spare", 00:17:21.127 "uuid": "d9824cc2-dd8e-5ac0-9be8-e4c271ee380f", 00:17:21.127 "is_configured": true, 00:17:21.127 "data_offset": 2048, 00:17:21.127 "data_size": 63488 00:17:21.127 }, 00:17:21.127 { 00:17:21.127 "name": null, 00:17:21.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.127 "is_configured": false, 00:17:21.127 "data_offset": 2048, 00:17:21.127 "data_size": 63488 00:17:21.127 }, 00:17:21.127 { 00:17:21.127 "name": "BaseBdev3", 00:17:21.127 "uuid": "7158c6ab-a2e1-5e0f-904b-c341822c0823", 00:17:21.127 "is_configured": true, 00:17:21.127 "data_offset": 2048, 00:17:21.127 "data_size": 63488 00:17:21.127 }, 00:17:21.127 { 00:17:21.127 "name": "BaseBdev4", 00:17:21.127 "uuid": "0ae564be-3b35-517d-90dc-0108b41869fe", 00:17:21.127 "is_configured": true, 00:17:21.127 "data_offset": 2048, 00:17:21.127 "data_size": 63488 00:17:21.127 } 00:17:21.127 ] 00:17:21.127 }' 00:17:21.127 06:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:21.390 06:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:21.390 06:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:21.390 06:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:21.390 06:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:21.390 06:27:05 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.390 06:27:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.390 [2024-11-26 06:27:05.294885] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:21.390 [2024-11-26 06:27:05.361282] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:21.390 [2024-11-26 06:27:05.361375] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:21.390 [2024-11-26 06:27:05.361393] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:21.390 [2024-11-26 06:27:05.361404] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:21.390 06:27:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.390 06:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:21.390 06:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:21.390 06:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:21.390 06:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:21.390 06:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:21.390 06:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:21.390 06:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:21.390 06:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:21.390 06:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:21.390 06:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:21.390 
06:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.390 06:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.390 06:27:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.390 06:27:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.390 06:27:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.390 06:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:21.390 "name": "raid_bdev1", 00:17:21.390 "uuid": "68b4c3db-ff03-4eaa-96f6-47c96cdb3d3e", 00:17:21.390 "strip_size_kb": 0, 00:17:21.390 "state": "online", 00:17:21.390 "raid_level": "raid1", 00:17:21.390 "superblock": true, 00:17:21.390 "num_base_bdevs": 4, 00:17:21.390 "num_base_bdevs_discovered": 2, 00:17:21.390 "num_base_bdevs_operational": 2, 00:17:21.390 "base_bdevs_list": [ 00:17:21.390 { 00:17:21.390 "name": null, 00:17:21.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.390 "is_configured": false, 00:17:21.390 "data_offset": 0, 00:17:21.390 "data_size": 63488 00:17:21.390 }, 00:17:21.390 { 00:17:21.390 "name": null, 00:17:21.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.390 "is_configured": false, 00:17:21.390 "data_offset": 2048, 00:17:21.390 "data_size": 63488 00:17:21.390 }, 00:17:21.390 { 00:17:21.390 "name": "BaseBdev3", 00:17:21.390 "uuid": "7158c6ab-a2e1-5e0f-904b-c341822c0823", 00:17:21.390 "is_configured": true, 00:17:21.390 "data_offset": 2048, 00:17:21.390 "data_size": 63488 00:17:21.390 }, 00:17:21.390 { 00:17:21.390 "name": "BaseBdev4", 00:17:21.390 "uuid": "0ae564be-3b35-517d-90dc-0108b41869fe", 00:17:21.390 "is_configured": true, 00:17:21.390 "data_offset": 2048, 00:17:21.390 "data_size": 63488 00:17:21.390 } 00:17:21.390 ] 00:17:21.390 }' 00:17:21.390 06:27:05 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:21.390 06:27:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.961 06:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:21.961 06:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:21.961 06:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:21.961 06:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:21.961 06:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:21.961 06:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.961 06:27:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.961 06:27:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.961 06:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.961 06:27:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.961 06:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:21.961 "name": "raid_bdev1", 00:17:21.961 "uuid": "68b4c3db-ff03-4eaa-96f6-47c96cdb3d3e", 00:17:21.961 "strip_size_kb": 0, 00:17:21.961 "state": "online", 00:17:21.961 "raid_level": "raid1", 00:17:21.961 "superblock": true, 00:17:21.961 "num_base_bdevs": 4, 00:17:21.961 "num_base_bdevs_discovered": 2, 00:17:21.961 "num_base_bdevs_operational": 2, 00:17:21.961 "base_bdevs_list": [ 00:17:21.961 { 00:17:21.961 "name": null, 00:17:21.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.961 "is_configured": false, 00:17:21.961 "data_offset": 0, 00:17:21.961 "data_size": 63488 00:17:21.961 }, 00:17:21.961 
{ 00:17:21.961 "name": null, 00:17:21.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.961 "is_configured": false, 00:17:21.961 "data_offset": 2048, 00:17:21.961 "data_size": 63488 00:17:21.961 }, 00:17:21.961 { 00:17:21.961 "name": "BaseBdev3", 00:17:21.961 "uuid": "7158c6ab-a2e1-5e0f-904b-c341822c0823", 00:17:21.961 "is_configured": true, 00:17:21.961 "data_offset": 2048, 00:17:21.961 "data_size": 63488 00:17:21.961 }, 00:17:21.961 { 00:17:21.961 "name": "BaseBdev4", 00:17:21.962 "uuid": "0ae564be-3b35-517d-90dc-0108b41869fe", 00:17:21.962 "is_configured": true, 00:17:21.962 "data_offset": 2048, 00:17:21.962 "data_size": 63488 00:17:21.962 } 00:17:21.962 ] 00:17:21.962 }' 00:17:21.962 06:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:21.962 06:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:21.962 06:27:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:21.962 06:27:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:21.962 06:27:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:21.962 06:27:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.962 06:27:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.962 06:27:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.962 06:27:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:21.962 06:27:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.962 06:27:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.962 [2024-11-26 06:27:06.042278] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:21.962 [2024-11-26 06:27:06.042365] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:21.962 [2024-11-26 06:27:06.042387] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:17:21.962 [2024-11-26 06:27:06.042398] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:21.962 [2024-11-26 06:27:06.042904] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:21.962 [2024-11-26 06:27:06.042936] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:21.962 [2024-11-26 06:27:06.043027] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:21.962 [2024-11-26 06:27:06.043046] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:17:21.962 [2024-11-26 06:27:06.043082] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:21.962 [2024-11-26 06:27:06.043110] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:21.962 BaseBdev1 00:17:21.962 06:27:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.962 06:27:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:23.340 06:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:23.340 06:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:23.340 06:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:23.340 06:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:23.340 06:27:07 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:23.340 06:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:23.340 06:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:23.340 06:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:23.340 06:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:23.340 06:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:23.340 06:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.341 06:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:23.341 06:27:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.341 06:27:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.341 06:27:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.341 06:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:23.341 "name": "raid_bdev1", 00:17:23.341 "uuid": "68b4c3db-ff03-4eaa-96f6-47c96cdb3d3e", 00:17:23.341 "strip_size_kb": 0, 00:17:23.341 "state": "online", 00:17:23.341 "raid_level": "raid1", 00:17:23.341 "superblock": true, 00:17:23.341 "num_base_bdevs": 4, 00:17:23.341 "num_base_bdevs_discovered": 2, 00:17:23.341 "num_base_bdevs_operational": 2, 00:17:23.341 "base_bdevs_list": [ 00:17:23.341 { 00:17:23.341 "name": null, 00:17:23.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.341 "is_configured": false, 00:17:23.341 "data_offset": 0, 00:17:23.341 "data_size": 63488 00:17:23.341 }, 00:17:23.341 { 00:17:23.341 "name": null, 00:17:23.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.341 
"is_configured": false, 00:17:23.341 "data_offset": 2048, 00:17:23.341 "data_size": 63488 00:17:23.341 }, 00:17:23.341 { 00:17:23.341 "name": "BaseBdev3", 00:17:23.341 "uuid": "7158c6ab-a2e1-5e0f-904b-c341822c0823", 00:17:23.341 "is_configured": true, 00:17:23.341 "data_offset": 2048, 00:17:23.341 "data_size": 63488 00:17:23.341 }, 00:17:23.341 { 00:17:23.341 "name": "BaseBdev4", 00:17:23.341 "uuid": "0ae564be-3b35-517d-90dc-0108b41869fe", 00:17:23.341 "is_configured": true, 00:17:23.341 "data_offset": 2048, 00:17:23.341 "data_size": 63488 00:17:23.341 } 00:17:23.341 ] 00:17:23.341 }' 00:17:23.341 06:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:23.341 06:27:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.600 06:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:23.600 06:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:23.600 06:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:23.600 06:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:23.600 06:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:23.600 06:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.600 06:27:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.600 06:27:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.600 06:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:23.600 06:27:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.600 06:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:17:23.600 "name": "raid_bdev1", 00:17:23.600 "uuid": "68b4c3db-ff03-4eaa-96f6-47c96cdb3d3e", 00:17:23.600 "strip_size_kb": 0, 00:17:23.600 "state": "online", 00:17:23.600 "raid_level": "raid1", 00:17:23.600 "superblock": true, 00:17:23.600 "num_base_bdevs": 4, 00:17:23.600 "num_base_bdevs_discovered": 2, 00:17:23.600 "num_base_bdevs_operational": 2, 00:17:23.600 "base_bdevs_list": [ 00:17:23.600 { 00:17:23.600 "name": null, 00:17:23.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.600 "is_configured": false, 00:17:23.600 "data_offset": 0, 00:17:23.600 "data_size": 63488 00:17:23.600 }, 00:17:23.600 { 00:17:23.600 "name": null, 00:17:23.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.600 "is_configured": false, 00:17:23.600 "data_offset": 2048, 00:17:23.600 "data_size": 63488 00:17:23.600 }, 00:17:23.600 { 00:17:23.600 "name": "BaseBdev3", 00:17:23.600 "uuid": "7158c6ab-a2e1-5e0f-904b-c341822c0823", 00:17:23.600 "is_configured": true, 00:17:23.600 "data_offset": 2048, 00:17:23.600 "data_size": 63488 00:17:23.600 }, 00:17:23.600 { 00:17:23.600 "name": "BaseBdev4", 00:17:23.600 "uuid": "0ae564be-3b35-517d-90dc-0108b41869fe", 00:17:23.600 "is_configured": true, 00:17:23.600 "data_offset": 2048, 00:17:23.600 "data_size": 63488 00:17:23.600 } 00:17:23.600 ] 00:17:23.600 }' 00:17:23.600 06:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:23.600 06:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:23.600 06:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:23.600 06:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:23.601 06:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:23.601 06:27:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:17:23.601 06:27:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:23.601 06:27:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:23.601 06:27:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:23.601 06:27:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:23.601 06:27:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:23.601 06:27:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:23.601 06:27:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.601 06:27:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.601 [2024-11-26 06:27:07.663650] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:23.601 [2024-11-26 06:27:07.663893] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:17:23.601 [2024-11-26 06:27:07.663909] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:23.601 request: 00:17:23.601 { 00:17:23.601 "base_bdev": "BaseBdev1", 00:17:23.601 "raid_bdev": "raid_bdev1", 00:17:23.601 "method": "bdev_raid_add_base_bdev", 00:17:23.601 "req_id": 1 00:17:23.601 } 00:17:23.601 Got JSON-RPC error response 00:17:23.601 response: 00:17:23.601 { 00:17:23.601 "code": -22, 00:17:23.601 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:23.601 } 00:17:23.601 06:27:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:23.601 06:27:07 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:17:23.601 06:27:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:23.601 06:27:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:23.601 06:27:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:23.601 06:27:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:24.980 06:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:24.980 06:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:24.980 06:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:24.980 06:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:24.980 06:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:24.980 06:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:24.980 06:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:24.980 06:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:24.980 06:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:24.980 06:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:24.980 06:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.980 06:27:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.980 06:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.980 06:27:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:17:24.980 06:27:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.980 06:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:24.980 "name": "raid_bdev1", 00:17:24.980 "uuid": "68b4c3db-ff03-4eaa-96f6-47c96cdb3d3e", 00:17:24.980 "strip_size_kb": 0, 00:17:24.980 "state": "online", 00:17:24.980 "raid_level": "raid1", 00:17:24.980 "superblock": true, 00:17:24.980 "num_base_bdevs": 4, 00:17:24.980 "num_base_bdevs_discovered": 2, 00:17:24.980 "num_base_bdevs_operational": 2, 00:17:24.980 "base_bdevs_list": [ 00:17:24.980 { 00:17:24.980 "name": null, 00:17:24.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.980 "is_configured": false, 00:17:24.980 "data_offset": 0, 00:17:24.980 "data_size": 63488 00:17:24.980 }, 00:17:24.980 { 00:17:24.980 "name": null, 00:17:24.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.980 "is_configured": false, 00:17:24.980 "data_offset": 2048, 00:17:24.980 "data_size": 63488 00:17:24.980 }, 00:17:24.980 { 00:17:24.980 "name": "BaseBdev3", 00:17:24.980 "uuid": "7158c6ab-a2e1-5e0f-904b-c341822c0823", 00:17:24.980 "is_configured": true, 00:17:24.980 "data_offset": 2048, 00:17:24.980 "data_size": 63488 00:17:24.980 }, 00:17:24.980 { 00:17:24.980 "name": "BaseBdev4", 00:17:24.980 "uuid": "0ae564be-3b35-517d-90dc-0108b41869fe", 00:17:24.980 "is_configured": true, 00:17:24.980 "data_offset": 2048, 00:17:24.980 "data_size": 63488 00:17:24.980 } 00:17:24.980 ] 00:17:24.980 }' 00:17:24.980 06:27:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:24.980 06:27:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.239 06:27:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:25.239 06:27:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:25.239 06:27:09 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:25.239 06:27:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:25.239 06:27:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:25.239 06:27:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.239 06:27:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.239 06:27:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.239 06:27:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.239 06:27:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.239 06:27:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:25.239 "name": "raid_bdev1", 00:17:25.239 "uuid": "68b4c3db-ff03-4eaa-96f6-47c96cdb3d3e", 00:17:25.239 "strip_size_kb": 0, 00:17:25.239 "state": "online", 00:17:25.239 "raid_level": "raid1", 00:17:25.239 "superblock": true, 00:17:25.239 "num_base_bdevs": 4, 00:17:25.239 "num_base_bdevs_discovered": 2, 00:17:25.239 "num_base_bdevs_operational": 2, 00:17:25.239 "base_bdevs_list": [ 00:17:25.239 { 00:17:25.239 "name": null, 00:17:25.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.239 "is_configured": false, 00:17:25.239 "data_offset": 0, 00:17:25.239 "data_size": 63488 00:17:25.239 }, 00:17:25.239 { 00:17:25.239 "name": null, 00:17:25.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.239 "is_configured": false, 00:17:25.239 "data_offset": 2048, 00:17:25.239 "data_size": 63488 00:17:25.239 }, 00:17:25.239 { 00:17:25.239 "name": "BaseBdev3", 00:17:25.239 "uuid": "7158c6ab-a2e1-5e0f-904b-c341822c0823", 00:17:25.239 "is_configured": true, 00:17:25.239 "data_offset": 2048, 00:17:25.239 "data_size": 63488 00:17:25.239 }, 
00:17:25.239 { 00:17:25.239 "name": "BaseBdev4", 00:17:25.239 "uuid": "0ae564be-3b35-517d-90dc-0108b41869fe", 00:17:25.239 "is_configured": true, 00:17:25.239 "data_offset": 2048, 00:17:25.239 "data_size": 63488 00:17:25.239 } 00:17:25.239 ] 00:17:25.239 }' 00:17:25.239 06:27:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:25.239 06:27:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:25.239 06:27:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:25.239 06:27:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:25.239 06:27:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 78529 00:17:25.239 06:27:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 78529 ']' 00:17:25.239 06:27:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 78529 00:17:25.239 06:27:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:17:25.239 06:27:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:25.239 06:27:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78529 00:17:25.239 06:27:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:25.239 06:27:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:25.239 killing process with pid 78529 00:17:25.239 06:27:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78529' 00:17:25.239 06:27:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 78529 00:17:25.239 Received shutdown signal, test time was about 60.000000 seconds 00:17:25.239 00:17:25.239 Latency(us) 00:17:25.239 
[2024-11-26T06:27:09.376Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:25.239 [2024-11-26T06:27:09.376Z] =================================================================================================================== 00:17:25.239 [2024-11-26T06:27:09.376Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:25.239 [2024-11-26 06:27:09.281775] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:25.239 06:27:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 78529 00:17:25.239 [2024-11-26 06:27:09.281955] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:25.239 [2024-11-26 06:27:09.282088] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:25.239 [2024-11-26 06:27:09.282110] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:25.805 [2024-11-26 06:27:09.833650] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:27.180 06:27:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:17:27.180 00:17:27.180 real 0m26.286s 00:17:27.180 user 0m31.341s 00:17:27.180 sys 0m4.162s 00:17:27.180 06:27:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:27.180 06:27:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.180 ************************************ 00:17:27.180 END TEST raid_rebuild_test_sb 00:17:27.180 ************************************ 00:17:27.180 06:27:11 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:17:27.180 06:27:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:27.180 06:27:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:27.180 06:27:11 bdev_raid -- common/autotest_common.sh@10 -- # 
set +x 00:17:27.180 ************************************ 00:17:27.180 START TEST raid_rebuild_test_io 00:17:27.180 ************************************ 00:17:27.180 06:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:17:27.180 06:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:27.180 06:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:17:27.180 06:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:17:27.181 06:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:17:27.181 06:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:27.181 06:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:27.181 06:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:27.181 06:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:27.181 06:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:27.181 06:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:27.181 06:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:27.181 06:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:27.181 06:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:27.181 06:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:27.181 06:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:27.181 06:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:27.181 06:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # 
echo BaseBdev4 00:17:27.181 06:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:27.181 06:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:27.181 06:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:27.181 06:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:27.181 06:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:27.181 06:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:27.181 06:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:27.181 06:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:27.181 06:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:27.181 06:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:27.181 06:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:27.181 06:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:17:27.181 06:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79295 00:17:27.181 06:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79295 00:17:27.181 06:27:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:27.181 06:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 79295 ']' 00:17:27.181 06:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:27.181 06:27:11 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:17:27.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:27.181 06:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:27.181 06:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:27.181 06:27:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:27.181 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:27.181 Zero copy mechanism will not be used. 00:17:27.181 [2024-11-26 06:27:11.254869] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:17:27.181 [2024-11-26 06:27:11.254999] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79295 ] 00:17:27.441 [2024-11-26 06:27:11.438662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.700 [2024-11-26 06:27:11.575029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:27.700 [2024-11-26 06:27:11.818106] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:27.700 [2024-11-26 06:27:11.818202] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:28.268 06:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:28.268 06:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:17:28.268 06:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:28.268 06:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 
00:17:28.268 06:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.268 06:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:28.268 BaseBdev1_malloc 00:17:28.268 06:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.268 06:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:28.268 06:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.268 06:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:28.268 [2024-11-26 06:27:12.163715] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:28.268 [2024-11-26 06:27:12.163801] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:28.268 [2024-11-26 06:27:12.163832] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:28.268 [2024-11-26 06:27:12.163847] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:28.268 [2024-11-26 06:27:12.166504] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:28.268 [2024-11-26 06:27:12.166545] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:28.268 BaseBdev1 00:17:28.268 06:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.268 06:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:28.268 06:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:28.268 06:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.268 06:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # 
set +x 00:17:28.268 BaseBdev2_malloc 00:17:28.268 06:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.268 06:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:28.268 06:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.268 06:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:28.268 [2024-11-26 06:27:12.226930] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:28.268 [2024-11-26 06:27:12.227020] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:28.268 [2024-11-26 06:27:12.227042] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:28.268 [2024-11-26 06:27:12.227054] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:28.268 [2024-11-26 06:27:12.229487] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:28.268 [2024-11-26 06:27:12.229527] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:28.268 BaseBdev2 00:17:28.268 06:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.268 06:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:28.268 06:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:28.268 06:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.268 06:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:28.268 BaseBdev3_malloc 00:17:28.268 06:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.268 06:27:12 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:28.268 06:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.268 06:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:28.268 [2024-11-26 06:27:12.301861] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:28.268 [2024-11-26 06:27:12.301924] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:28.268 [2024-11-26 06:27:12.301949] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:28.268 [2024-11-26 06:27:12.301961] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:28.268 [2024-11-26 06:27:12.304406] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:28.268 [2024-11-26 06:27:12.304446] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:28.268 BaseBdev3 00:17:28.269 06:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.269 06:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:28.269 06:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:28.269 06:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.269 06:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:28.269 BaseBdev4_malloc 00:17:28.269 06:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.269 06:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:17:28.269 06:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:28.269 06:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:28.269 [2024-11-26 06:27:12.363899] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:17:28.269 [2024-11-26 06:27:12.363983] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:28.269 [2024-11-26 06:27:12.364006] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:28.269 [2024-11-26 06:27:12.364019] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:28.269 [2024-11-26 06:27:12.366619] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:28.269 [2024-11-26 06:27:12.366677] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:28.269 BaseBdev4 00:17:28.269 06:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.269 06:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:28.269 06:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.269 06:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:28.528 spare_malloc 00:17:28.528 06:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.528 06:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:28.528 06:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.528 06:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:28.528 spare_delay 00:17:28.528 06:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.528 06:27:12 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:28.528 06:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.528 06:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:28.528 [2024-11-26 06:27:12.437473] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:28.528 [2024-11-26 06:27:12.437539] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:28.528 [2024-11-26 06:27:12.437559] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:28.528 [2024-11-26 06:27:12.437571] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:28.528 [2024-11-26 06:27:12.440093] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:28.528 [2024-11-26 06:27:12.440134] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:28.528 spare 00:17:28.528 06:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.528 06:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:17:28.528 06:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.528 06:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:28.528 [2024-11-26 06:27:12.449510] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:28.528 [2024-11-26 06:27:12.451658] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:28.528 [2024-11-26 06:27:12.451732] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:28.528 [2024-11-26 06:27:12.451786] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:17:28.528 [2024-11-26 06:27:12.451862] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:28.528 [2024-11-26 06:27:12.451875] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:17:28.528 [2024-11-26 06:27:12.452222] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:28.528 [2024-11-26 06:27:12.452462] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:28.528 [2024-11-26 06:27:12.452487] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:28.528 [2024-11-26 06:27:12.452661] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:28.528 06:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.528 06:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:17:28.528 06:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:28.528 06:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:28.528 06:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:28.528 06:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:28.528 06:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:28.528 06:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:28.528 06:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:28.528 06:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:28.528 06:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:17:28.528 06:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.528 06:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.528 06:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.528 06:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:28.528 06:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.528 06:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:28.528 "name": "raid_bdev1", 00:17:28.528 "uuid": "0dca9bf1-1dd4-438f-a660-3a02c2a7ad80", 00:17:28.528 "strip_size_kb": 0, 00:17:28.528 "state": "online", 00:17:28.528 "raid_level": "raid1", 00:17:28.528 "superblock": false, 00:17:28.528 "num_base_bdevs": 4, 00:17:28.528 "num_base_bdevs_discovered": 4, 00:17:28.528 "num_base_bdevs_operational": 4, 00:17:28.528 "base_bdevs_list": [ 00:17:28.528 { 00:17:28.528 "name": "BaseBdev1", 00:17:28.528 "uuid": "7cb93094-3c80-5630-a44a-4b7815aaaba2", 00:17:28.528 "is_configured": true, 00:17:28.528 "data_offset": 0, 00:17:28.528 "data_size": 65536 00:17:28.528 }, 00:17:28.528 { 00:17:28.528 "name": "BaseBdev2", 00:17:28.528 "uuid": "0d5fc44c-5926-55ba-aad6-a39c1d413d41", 00:17:28.528 "is_configured": true, 00:17:28.528 "data_offset": 0, 00:17:28.528 "data_size": 65536 00:17:28.528 }, 00:17:28.528 { 00:17:28.528 "name": "BaseBdev3", 00:17:28.528 "uuid": "02527702-21d5-5ebb-af80-4686f7b1ab9c", 00:17:28.528 "is_configured": true, 00:17:28.528 "data_offset": 0, 00:17:28.528 "data_size": 65536 00:17:28.528 }, 00:17:28.528 { 00:17:28.528 "name": "BaseBdev4", 00:17:28.528 "uuid": "84786597-5073-57be-902f-a92f0d473cc8", 00:17:28.528 "is_configured": true, 00:17:28.528 "data_offset": 0, 00:17:28.528 "data_size": 65536 00:17:28.528 } 00:17:28.528 ] 00:17:28.528 }' 00:17:28.528 
06:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:28.528 06:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:29.096 06:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:29.096 06:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.096 06:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:29.096 06:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:29.096 [2024-11-26 06:27:12.937077] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:29.096 06:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.096 06:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:17:29.096 06:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.096 06:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.096 06:27:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:29.096 06:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:29.096 06:27:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.096 06:27:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:17:29.096 06:27:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:17:29.096 06:27:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:29.096 06:27:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:17:29.096 06:27:13 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.096 06:27:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:29.096 [2024-11-26 06:27:13.028528] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:29.096 06:27:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.096 06:27:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:29.096 06:27:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:29.096 06:27:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:29.096 06:27:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:29.096 06:27:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:29.096 06:27:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:29.096 06:27:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:29.096 06:27:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:29.096 06:27:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:29.096 06:27:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:29.096 06:27:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:29.096 06:27:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.096 06:27:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.096 06:27:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:29.096 06:27:13 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.096 06:27:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:29.096 "name": "raid_bdev1", 00:17:29.096 "uuid": "0dca9bf1-1dd4-438f-a660-3a02c2a7ad80", 00:17:29.097 "strip_size_kb": 0, 00:17:29.097 "state": "online", 00:17:29.097 "raid_level": "raid1", 00:17:29.097 "superblock": false, 00:17:29.097 "num_base_bdevs": 4, 00:17:29.097 "num_base_bdevs_discovered": 3, 00:17:29.097 "num_base_bdevs_operational": 3, 00:17:29.097 "base_bdevs_list": [ 00:17:29.097 { 00:17:29.097 "name": null, 00:17:29.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.097 "is_configured": false, 00:17:29.097 "data_offset": 0, 00:17:29.097 "data_size": 65536 00:17:29.097 }, 00:17:29.097 { 00:17:29.097 "name": "BaseBdev2", 00:17:29.097 "uuid": "0d5fc44c-5926-55ba-aad6-a39c1d413d41", 00:17:29.097 "is_configured": true, 00:17:29.097 "data_offset": 0, 00:17:29.097 "data_size": 65536 00:17:29.097 }, 00:17:29.097 { 00:17:29.097 "name": "BaseBdev3", 00:17:29.097 "uuid": "02527702-21d5-5ebb-af80-4686f7b1ab9c", 00:17:29.097 "is_configured": true, 00:17:29.097 "data_offset": 0, 00:17:29.097 "data_size": 65536 00:17:29.097 }, 00:17:29.097 { 00:17:29.097 "name": "BaseBdev4", 00:17:29.097 "uuid": "84786597-5073-57be-902f-a92f0d473cc8", 00:17:29.097 "is_configured": true, 00:17:29.097 "data_offset": 0, 00:17:29.097 "data_size": 65536 00:17:29.097 } 00:17:29.097 ] 00:17:29.097 }' 00:17:29.097 06:27:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:29.097 06:27:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:29.097 [2024-11-26 06:27:13.130621] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:17:29.097 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:29.097 Zero copy mechanism will not be used. 00:17:29.097 Running I/O for 60 seconds... 
00:17:29.356 06:27:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:29.356 06:27:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.356 06:27:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:29.356 [2024-11-26 06:27:13.451637] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:29.614 06:27:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.614 06:27:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:29.614 [2024-11-26 06:27:13.541279] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:17:29.614 [2024-11-26 06:27:13.543963] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:29.614 [2024-11-26 06:27:13.659643] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:29.614 [2024-11-26 06:27:13.660704] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:29.873 [2024-11-26 06:27:13.782939] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:29.873 [2024-11-26 06:27:13.783569] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:30.133 [2024-11-26 06:27:14.032386] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:17:30.392 153.00 IOPS, 459.00 MiB/s [2024-11-26T06:27:14.529Z] 06:27:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:30.392 06:27:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:17:30.392 06:27:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:30.392 06:27:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:30.392 06:27:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:30.392 06:27:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.392 06:27:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.392 06:27:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.392 06:27:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:30.693 06:27:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.693 06:27:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:30.693 "name": "raid_bdev1", 00:17:30.693 "uuid": "0dca9bf1-1dd4-438f-a660-3a02c2a7ad80", 00:17:30.693 "strip_size_kb": 0, 00:17:30.693 "state": "online", 00:17:30.693 "raid_level": "raid1", 00:17:30.693 "superblock": false, 00:17:30.693 "num_base_bdevs": 4, 00:17:30.693 "num_base_bdevs_discovered": 4, 00:17:30.693 "num_base_bdevs_operational": 4, 00:17:30.693 "process": { 00:17:30.693 "type": "rebuild", 00:17:30.693 "target": "spare", 00:17:30.693 "progress": { 00:17:30.693 "blocks": 14336, 00:17:30.693 "percent": 21 00:17:30.693 } 00:17:30.693 }, 00:17:30.693 "base_bdevs_list": [ 00:17:30.693 { 00:17:30.693 "name": "spare", 00:17:30.693 "uuid": "129f42bd-d319-5f43-9f40-4eacde3cc9ba", 00:17:30.693 "is_configured": true, 00:17:30.693 "data_offset": 0, 00:17:30.693 "data_size": 65536 00:17:30.693 }, 00:17:30.693 { 00:17:30.693 "name": "BaseBdev2", 00:17:30.693 "uuid": "0d5fc44c-5926-55ba-aad6-a39c1d413d41", 00:17:30.693 "is_configured": true, 00:17:30.693 "data_offset": 0, 00:17:30.693 
"data_size": 65536 00:17:30.693 }, 00:17:30.693 { 00:17:30.693 "name": "BaseBdev3", 00:17:30.693 "uuid": "02527702-21d5-5ebb-af80-4686f7b1ab9c", 00:17:30.693 "is_configured": true, 00:17:30.693 "data_offset": 0, 00:17:30.693 "data_size": 65536 00:17:30.693 }, 00:17:30.693 { 00:17:30.693 "name": "BaseBdev4", 00:17:30.693 "uuid": "84786597-5073-57be-902f-a92f0d473cc8", 00:17:30.693 "is_configured": true, 00:17:30.693 "data_offset": 0, 00:17:30.693 "data_size": 65536 00:17:30.693 } 00:17:30.693 ] 00:17:30.693 }' 00:17:30.693 06:27:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:30.693 06:27:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:30.693 06:27:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:30.693 06:27:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:30.693 06:27:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:30.693 06:27:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.693 06:27:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:30.693 [2024-11-26 06:27:14.662131] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:30.952 [2024-11-26 06:27:14.894866] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:30.952 [2024-11-26 06:27:14.909959] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:30.952 [2024-11-26 06:27:14.910069] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:30.952 [2024-11-26 06:27:14.910085] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:30.952 [2024-11-26 06:27:14.936678] 
bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:17:30.952 06:27:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.952 06:27:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:30.952 06:27:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:30.952 06:27:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:30.952 06:27:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:30.952 06:27:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:30.952 06:27:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:30.952 06:27:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:30.952 06:27:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:30.952 06:27:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:30.952 06:27:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:30.952 06:27:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.952 06:27:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.952 06:27:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.952 06:27:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:30.952 06:27:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.952 06:27:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:30.952 "name": "raid_bdev1", 00:17:30.952 
"uuid": "0dca9bf1-1dd4-438f-a660-3a02c2a7ad80", 00:17:30.952 "strip_size_kb": 0, 00:17:30.952 "state": "online", 00:17:30.952 "raid_level": "raid1", 00:17:30.952 "superblock": false, 00:17:30.952 "num_base_bdevs": 4, 00:17:30.952 "num_base_bdevs_discovered": 3, 00:17:30.952 "num_base_bdevs_operational": 3, 00:17:30.952 "base_bdevs_list": [ 00:17:30.952 { 00:17:30.952 "name": null, 00:17:30.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.952 "is_configured": false, 00:17:30.952 "data_offset": 0, 00:17:30.952 "data_size": 65536 00:17:30.952 }, 00:17:30.952 { 00:17:30.952 "name": "BaseBdev2", 00:17:30.952 "uuid": "0d5fc44c-5926-55ba-aad6-a39c1d413d41", 00:17:30.952 "is_configured": true, 00:17:30.952 "data_offset": 0, 00:17:30.952 "data_size": 65536 00:17:30.952 }, 00:17:30.952 { 00:17:30.952 "name": "BaseBdev3", 00:17:30.952 "uuid": "02527702-21d5-5ebb-af80-4686f7b1ab9c", 00:17:30.952 "is_configured": true, 00:17:30.952 "data_offset": 0, 00:17:30.952 "data_size": 65536 00:17:30.952 }, 00:17:30.952 { 00:17:30.952 "name": "BaseBdev4", 00:17:30.952 "uuid": "84786597-5073-57be-902f-a92f0d473cc8", 00:17:30.952 "is_configured": true, 00:17:30.952 "data_offset": 0, 00:17:30.952 "data_size": 65536 00:17:30.952 } 00:17:30.952 ] 00:17:30.952 }' 00:17:30.952 06:27:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:30.952 06:27:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:31.470 120.50 IOPS, 361.50 MiB/s [2024-11-26T06:27:15.607Z] 06:27:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:31.470 06:27:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:31.470 06:27:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:31.470 06:27:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:31.470 06:27:15 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:31.470 06:27:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.470 06:27:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.470 06:27:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:31.470 06:27:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.470 06:27:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.470 06:27:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:31.470 "name": "raid_bdev1", 00:17:31.470 "uuid": "0dca9bf1-1dd4-438f-a660-3a02c2a7ad80", 00:17:31.470 "strip_size_kb": 0, 00:17:31.470 "state": "online", 00:17:31.470 "raid_level": "raid1", 00:17:31.470 "superblock": false, 00:17:31.470 "num_base_bdevs": 4, 00:17:31.470 "num_base_bdevs_discovered": 3, 00:17:31.470 "num_base_bdevs_operational": 3, 00:17:31.470 "base_bdevs_list": [ 00:17:31.470 { 00:17:31.470 "name": null, 00:17:31.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.470 "is_configured": false, 00:17:31.470 "data_offset": 0, 00:17:31.470 "data_size": 65536 00:17:31.470 }, 00:17:31.470 { 00:17:31.470 "name": "BaseBdev2", 00:17:31.470 "uuid": "0d5fc44c-5926-55ba-aad6-a39c1d413d41", 00:17:31.470 "is_configured": true, 00:17:31.470 "data_offset": 0, 00:17:31.470 "data_size": 65536 00:17:31.470 }, 00:17:31.470 { 00:17:31.470 "name": "BaseBdev3", 00:17:31.470 "uuid": "02527702-21d5-5ebb-af80-4686f7b1ab9c", 00:17:31.470 "is_configured": true, 00:17:31.470 "data_offset": 0, 00:17:31.470 "data_size": 65536 00:17:31.470 }, 00:17:31.470 { 00:17:31.470 "name": "BaseBdev4", 00:17:31.470 "uuid": "84786597-5073-57be-902f-a92f0d473cc8", 00:17:31.470 "is_configured": true, 00:17:31.470 "data_offset": 0, 00:17:31.470 "data_size": 65536 
00:17:31.470 } 00:17:31.470 ] 00:17:31.470 }' 00:17:31.470 06:27:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:31.470 06:27:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:31.470 06:27:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:31.470 06:27:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:31.470 06:27:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:31.470 06:27:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.470 06:27:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:31.470 [2024-11-26 06:27:15.595798] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:31.728 06:27:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.728 06:27:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:31.728 [2024-11-26 06:27:15.660645] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:17:31.728 [2024-11-26 06:27:15.663090] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:31.728 [2024-11-26 06:27:15.797496] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:31.986 [2024-11-26 06:27:15.928878] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:31.987 [2024-11-26 06:27:15.929476] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:32.245 126.67 IOPS, 380.00 MiB/s [2024-11-26T06:27:16.382Z] [2024-11-26 06:27:16.264254] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:17:32.245 [2024-11-26 06:27:16.265257] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:17:32.503 [2024-11-26 06:27:16.392462] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:32.503 [2024-11-26 06:27:16.393757] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:32.761 06:27:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:32.761 06:27:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:32.761 06:27:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:32.761 06:27:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:32.761 06:27:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:32.761 06:27:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.761 06:27:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:32.761 06:27:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.761 06:27:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:32.761 06:27:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.761 06:27:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:32.761 "name": "raid_bdev1", 00:17:32.761 "uuid": "0dca9bf1-1dd4-438f-a660-3a02c2a7ad80", 00:17:32.761 "strip_size_kb": 0, 00:17:32.761 "state": "online", 00:17:32.761 "raid_level": 
"raid1", 00:17:32.761 "superblock": false, 00:17:32.761 "num_base_bdevs": 4, 00:17:32.761 "num_base_bdevs_discovered": 4, 00:17:32.761 "num_base_bdevs_operational": 4, 00:17:32.761 "process": { 00:17:32.761 "type": "rebuild", 00:17:32.761 "target": "spare", 00:17:32.761 "progress": { 00:17:32.761 "blocks": 12288, 00:17:32.761 "percent": 18 00:17:32.761 } 00:17:32.761 }, 00:17:32.761 "base_bdevs_list": [ 00:17:32.761 { 00:17:32.761 "name": "spare", 00:17:32.761 "uuid": "129f42bd-d319-5f43-9f40-4eacde3cc9ba", 00:17:32.761 "is_configured": true, 00:17:32.761 "data_offset": 0, 00:17:32.761 "data_size": 65536 00:17:32.761 }, 00:17:32.761 { 00:17:32.761 "name": "BaseBdev2", 00:17:32.761 "uuid": "0d5fc44c-5926-55ba-aad6-a39c1d413d41", 00:17:32.761 "is_configured": true, 00:17:32.761 "data_offset": 0, 00:17:32.761 "data_size": 65536 00:17:32.761 }, 00:17:32.761 { 00:17:32.761 "name": "BaseBdev3", 00:17:32.761 "uuid": "02527702-21d5-5ebb-af80-4686f7b1ab9c", 00:17:32.761 "is_configured": true, 00:17:32.761 "data_offset": 0, 00:17:32.761 "data_size": 65536 00:17:32.761 }, 00:17:32.761 { 00:17:32.761 "name": "BaseBdev4", 00:17:32.761 "uuid": "84786597-5073-57be-902f-a92f0d473cc8", 00:17:32.761 "is_configured": true, 00:17:32.761 "data_offset": 0, 00:17:32.761 "data_size": 65536 00:17:32.761 } 00:17:32.761 ] 00:17:32.761 }' 00:17:32.761 06:27:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:32.761 [2024-11-26 06:27:16.755582] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:17:32.761 [2024-11-26 06:27:16.756568] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:17:32.761 06:27:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:32.761 06:27:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // 
"none"' 00:17:32.761 06:27:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:32.761 06:27:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:17:32.762 06:27:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:17:32.762 06:27:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:32.762 06:27:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:17:32.762 06:27:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:32.762 06:27:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.762 06:27:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:32.762 [2024-11-26 06:27:16.830448] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:33.021 [2024-11-26 06:27:16.901067] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:17:33.021 [2024-11-26 06:27:16.901128] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:17:33.021 06:27:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.021 06:27:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:17:33.021 06:27:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:17:33.021 06:27:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:33.021 06:27:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:33.021 06:27:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:33.021 06:27:16 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:17:33.021 06:27:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:33.021 06:27:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.021 06:27:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.021 06:27:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.021 06:27:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:33.021 06:27:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.021 06:27:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:33.021 "name": "raid_bdev1", 00:17:33.021 "uuid": "0dca9bf1-1dd4-438f-a660-3a02c2a7ad80", 00:17:33.021 "strip_size_kb": 0, 00:17:33.021 "state": "online", 00:17:33.021 "raid_level": "raid1", 00:17:33.021 "superblock": false, 00:17:33.021 "num_base_bdevs": 4, 00:17:33.021 "num_base_bdevs_discovered": 3, 00:17:33.021 "num_base_bdevs_operational": 3, 00:17:33.021 "process": { 00:17:33.021 "type": "rebuild", 00:17:33.021 "target": "spare", 00:17:33.021 "progress": { 00:17:33.021 "blocks": 16384, 00:17:33.021 "percent": 25 00:17:33.021 } 00:17:33.021 }, 00:17:33.021 "base_bdevs_list": [ 00:17:33.021 { 00:17:33.021 "name": "spare", 00:17:33.021 "uuid": "129f42bd-d319-5f43-9f40-4eacde3cc9ba", 00:17:33.021 "is_configured": true, 00:17:33.021 "data_offset": 0, 00:17:33.021 "data_size": 65536 00:17:33.021 }, 00:17:33.021 { 00:17:33.021 "name": null, 00:17:33.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.021 "is_configured": false, 00:17:33.021 "data_offset": 0, 00:17:33.021 "data_size": 65536 00:17:33.021 }, 00:17:33.021 { 00:17:33.021 "name": "BaseBdev3", 00:17:33.021 "uuid": "02527702-21d5-5ebb-af80-4686f7b1ab9c", 00:17:33.021 "is_configured": true, 00:17:33.021 
"data_offset": 0, 00:17:33.021 "data_size": 65536 00:17:33.021 }, 00:17:33.021 { 00:17:33.021 "name": "BaseBdev4", 00:17:33.021 "uuid": "84786597-5073-57be-902f-a92f0d473cc8", 00:17:33.021 "is_configured": true, 00:17:33.021 "data_offset": 0, 00:17:33.021 "data_size": 65536 00:17:33.021 } 00:17:33.021 ] 00:17:33.021 }' 00:17:33.021 06:27:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:33.021 06:27:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:33.021 06:27:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:33.021 06:27:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:33.021 06:27:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=510 00:17:33.021 06:27:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:33.021 06:27:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:33.021 06:27:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:33.021 06:27:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:33.021 06:27:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:33.021 06:27:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:33.021 06:27:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.021 06:27:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.021 06:27:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:33.021 06:27:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:17:33.021 06:27:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.021 06:27:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:33.021 "name": "raid_bdev1", 00:17:33.021 "uuid": "0dca9bf1-1dd4-438f-a660-3a02c2a7ad80", 00:17:33.021 "strip_size_kb": 0, 00:17:33.021 "state": "online", 00:17:33.021 "raid_level": "raid1", 00:17:33.021 "superblock": false, 00:17:33.021 "num_base_bdevs": 4, 00:17:33.021 "num_base_bdevs_discovered": 3, 00:17:33.021 "num_base_bdevs_operational": 3, 00:17:33.021 "process": { 00:17:33.021 "type": "rebuild", 00:17:33.021 "target": "spare", 00:17:33.021 "progress": { 00:17:33.021 "blocks": 18432, 00:17:33.021 "percent": 28 00:17:33.021 } 00:17:33.021 }, 00:17:33.021 "base_bdevs_list": [ 00:17:33.021 { 00:17:33.021 "name": "spare", 00:17:33.021 "uuid": "129f42bd-d319-5f43-9f40-4eacde3cc9ba", 00:17:33.021 "is_configured": true, 00:17:33.021 "data_offset": 0, 00:17:33.021 "data_size": 65536 00:17:33.021 }, 00:17:33.021 { 00:17:33.021 "name": null, 00:17:33.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.021 "is_configured": false, 00:17:33.021 "data_offset": 0, 00:17:33.021 "data_size": 65536 00:17:33.021 }, 00:17:33.021 { 00:17:33.021 "name": "BaseBdev3", 00:17:33.021 "uuid": "02527702-21d5-5ebb-af80-4686f7b1ab9c", 00:17:33.021 "is_configured": true, 00:17:33.021 "data_offset": 0, 00:17:33.021 "data_size": 65536 00:17:33.021 }, 00:17:33.021 { 00:17:33.021 "name": "BaseBdev4", 00:17:33.021 "uuid": "84786597-5073-57be-902f-a92f0d473cc8", 00:17:33.021 "is_configured": true, 00:17:33.021 "data_offset": 0, 00:17:33.021 "data_size": 65536 00:17:33.021 } 00:17:33.021 ] 00:17:33.021 }' 00:17:33.021 06:27:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:33.021 112.00 IOPS, 336.00 MiB/s [2024-11-26T06:27:17.158Z] 06:27:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d 
]] 00:17:33.021 06:27:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:33.280 06:27:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:33.280 06:27:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:33.537 [2024-11-26 06:27:17.611251] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:17:34.104 [2024-11-26 06:27:17.936467] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:17:34.104 98.40 IOPS, 295.20 MiB/s [2024-11-26T06:27:18.241Z] [2024-11-26 06:27:18.169261] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:17:34.104 06:27:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:34.104 06:27:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:34.104 06:27:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:34.104 06:27:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:34.104 06:27:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:34.104 06:27:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:34.104 06:27:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.104 06:27:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.104 06:27:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.104 06:27:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:34.104 
06:27:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.104 06:27:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:34.104 "name": "raid_bdev1", 00:17:34.104 "uuid": "0dca9bf1-1dd4-438f-a660-3a02c2a7ad80", 00:17:34.104 "strip_size_kb": 0, 00:17:34.104 "state": "online", 00:17:34.104 "raid_level": "raid1", 00:17:34.104 "superblock": false, 00:17:34.104 "num_base_bdevs": 4, 00:17:34.104 "num_base_bdevs_discovered": 3, 00:17:34.104 "num_base_bdevs_operational": 3, 00:17:34.104 "process": { 00:17:34.104 "type": "rebuild", 00:17:34.104 "target": "spare", 00:17:34.104 "progress": { 00:17:34.104 "blocks": 34816, 00:17:34.104 "percent": 53 00:17:34.104 } 00:17:34.104 }, 00:17:34.104 "base_bdevs_list": [ 00:17:34.104 { 00:17:34.104 "name": "spare", 00:17:34.104 "uuid": "129f42bd-d319-5f43-9f40-4eacde3cc9ba", 00:17:34.104 "is_configured": true, 00:17:34.104 "data_offset": 0, 00:17:34.104 "data_size": 65536 00:17:34.104 }, 00:17:34.104 { 00:17:34.104 "name": null, 00:17:34.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.104 "is_configured": false, 00:17:34.104 "data_offset": 0, 00:17:34.104 "data_size": 65536 00:17:34.104 }, 00:17:34.104 { 00:17:34.104 "name": "BaseBdev3", 00:17:34.104 "uuid": "02527702-21d5-5ebb-af80-4686f7b1ab9c", 00:17:34.104 "is_configured": true, 00:17:34.104 "data_offset": 0, 00:17:34.104 "data_size": 65536 00:17:34.104 }, 00:17:34.104 { 00:17:34.104 "name": "BaseBdev4", 00:17:34.104 "uuid": "84786597-5073-57be-902f-a92f0d473cc8", 00:17:34.104 "is_configured": true, 00:17:34.104 "data_offset": 0, 00:17:34.104 "data_size": 65536 00:17:34.104 } 00:17:34.104 ] 00:17:34.104 }' 00:17:34.363 06:27:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:34.363 06:27:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:34.363 06:27:18 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:34.363 06:27:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:34.363 06:27:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:34.622 [2024-11-26 06:27:18.506210] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:17:34.880 [2024-11-26 06:27:18.861236] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:17:34.880 [2024-11-26 06:27:18.861771] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:17:35.178 [2024-11-26 06:27:19.095337] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:17:35.178 [2024-11-26 06:27:19.095944] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:17:35.455 91.33 IOPS, 274.00 MiB/s [2024-11-26T06:27:19.592Z] 06:27:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:35.455 06:27:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:35.455 06:27:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:35.455 06:27:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:35.456 06:27:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:35.456 06:27:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:35.456 06:27:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.456 06:27:19 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.456 06:27:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.456 06:27:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:35.456 06:27:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.456 06:27:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:35.456 "name": "raid_bdev1", 00:17:35.456 "uuid": "0dca9bf1-1dd4-438f-a660-3a02c2a7ad80", 00:17:35.456 "strip_size_kb": 0, 00:17:35.456 "state": "online", 00:17:35.456 "raid_level": "raid1", 00:17:35.456 "superblock": false, 00:17:35.456 "num_base_bdevs": 4, 00:17:35.456 "num_base_bdevs_discovered": 3, 00:17:35.456 "num_base_bdevs_operational": 3, 00:17:35.456 "process": { 00:17:35.456 "type": "rebuild", 00:17:35.456 "target": "spare", 00:17:35.456 "progress": { 00:17:35.456 "blocks": 49152, 00:17:35.456 "percent": 75 00:17:35.456 } 00:17:35.456 }, 00:17:35.456 "base_bdevs_list": [ 00:17:35.456 { 00:17:35.456 "name": "spare", 00:17:35.456 "uuid": "129f42bd-d319-5f43-9f40-4eacde3cc9ba", 00:17:35.456 "is_configured": true, 00:17:35.456 "data_offset": 0, 00:17:35.456 "data_size": 65536 00:17:35.456 }, 00:17:35.456 { 00:17:35.456 "name": null, 00:17:35.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.456 "is_configured": false, 00:17:35.456 "data_offset": 0, 00:17:35.456 "data_size": 65536 00:17:35.456 }, 00:17:35.456 { 00:17:35.456 "name": "BaseBdev3", 00:17:35.456 "uuid": "02527702-21d5-5ebb-af80-4686f7b1ab9c", 00:17:35.456 "is_configured": true, 00:17:35.456 "data_offset": 0, 00:17:35.456 "data_size": 65536 00:17:35.456 }, 00:17:35.456 { 00:17:35.456 "name": "BaseBdev4", 00:17:35.456 "uuid": "84786597-5073-57be-902f-a92f0d473cc8", 00:17:35.456 "is_configured": true, 00:17:35.456 "data_offset": 0, 00:17:35.456 "data_size": 65536 00:17:35.456 } 00:17:35.456 ] 00:17:35.456 }' 
00:17:35.456 06:27:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:35.456 06:27:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:35.456 06:27:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:35.456 06:27:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:35.456 06:27:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:35.715 [2024-11-26 06:27:19.759131] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:17:35.715 [2024-11-26 06:27:19.760137] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:17:36.284 82.57 IOPS, 247.71 MiB/s [2024-11-26T06:27:20.421Z] [2024-11-26 06:27:20.224717] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:36.284 [2024-11-26 06:27:20.324538] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:36.284 [2024-11-26 06:27:20.328165] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:36.543 06:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:36.543 06:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:36.543 06:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:36.543 06:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:36.543 06:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:36.543 06:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:36.543 
06:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.543 06:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.543 06:27:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.543 06:27:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:36.543 06:27:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.543 06:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:36.543 "name": "raid_bdev1", 00:17:36.543 "uuid": "0dca9bf1-1dd4-438f-a660-3a02c2a7ad80", 00:17:36.543 "strip_size_kb": 0, 00:17:36.543 "state": "online", 00:17:36.543 "raid_level": "raid1", 00:17:36.543 "superblock": false, 00:17:36.543 "num_base_bdevs": 4, 00:17:36.543 "num_base_bdevs_discovered": 3, 00:17:36.543 "num_base_bdevs_operational": 3, 00:17:36.543 "base_bdevs_list": [ 00:17:36.543 { 00:17:36.543 "name": "spare", 00:17:36.543 "uuid": "129f42bd-d319-5f43-9f40-4eacde3cc9ba", 00:17:36.543 "is_configured": true, 00:17:36.543 "data_offset": 0, 00:17:36.543 "data_size": 65536 00:17:36.543 }, 00:17:36.543 { 00:17:36.543 "name": null, 00:17:36.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.543 "is_configured": false, 00:17:36.543 "data_offset": 0, 00:17:36.543 "data_size": 65536 00:17:36.543 }, 00:17:36.543 { 00:17:36.543 "name": "BaseBdev3", 00:17:36.543 "uuid": "02527702-21d5-5ebb-af80-4686f7b1ab9c", 00:17:36.543 "is_configured": true, 00:17:36.543 "data_offset": 0, 00:17:36.543 "data_size": 65536 00:17:36.543 }, 00:17:36.543 { 00:17:36.543 "name": "BaseBdev4", 00:17:36.543 "uuid": "84786597-5073-57be-902f-a92f0d473cc8", 00:17:36.543 "is_configured": true, 00:17:36.543 "data_offset": 0, 00:17:36.543 "data_size": 65536 00:17:36.543 } 00:17:36.543 ] 00:17:36.543 }' 00:17:36.543 06:27:20 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:36.543 06:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:36.543 06:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:36.543 06:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:36.543 06:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:17:36.543 06:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:36.543 06:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:36.543 06:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:36.543 06:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:36.543 06:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:36.543 06:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.543 06:27:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.543 06:27:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:36.543 06:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.543 06:27:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.543 06:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:36.543 "name": "raid_bdev1", 00:17:36.543 "uuid": "0dca9bf1-1dd4-438f-a660-3a02c2a7ad80", 00:17:36.543 "strip_size_kb": 0, 00:17:36.543 "state": "online", 00:17:36.543 "raid_level": "raid1", 00:17:36.543 "superblock": false, 00:17:36.543 "num_base_bdevs": 4, 00:17:36.543 
"num_base_bdevs_discovered": 3, 00:17:36.543 "num_base_bdevs_operational": 3, 00:17:36.543 "base_bdevs_list": [ 00:17:36.543 { 00:17:36.543 "name": "spare", 00:17:36.543 "uuid": "129f42bd-d319-5f43-9f40-4eacde3cc9ba", 00:17:36.543 "is_configured": true, 00:17:36.543 "data_offset": 0, 00:17:36.543 "data_size": 65536 00:17:36.543 }, 00:17:36.543 { 00:17:36.543 "name": null, 00:17:36.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.544 "is_configured": false, 00:17:36.544 "data_offset": 0, 00:17:36.544 "data_size": 65536 00:17:36.544 }, 00:17:36.544 { 00:17:36.544 "name": "BaseBdev3", 00:17:36.544 "uuid": "02527702-21d5-5ebb-af80-4686f7b1ab9c", 00:17:36.544 "is_configured": true, 00:17:36.544 "data_offset": 0, 00:17:36.544 "data_size": 65536 00:17:36.544 }, 00:17:36.544 { 00:17:36.544 "name": "BaseBdev4", 00:17:36.544 "uuid": "84786597-5073-57be-902f-a92f0d473cc8", 00:17:36.544 "is_configured": true, 00:17:36.544 "data_offset": 0, 00:17:36.544 "data_size": 65536 00:17:36.544 } 00:17:36.544 ] 00:17:36.544 }' 00:17:36.544 06:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:36.544 06:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:36.803 06:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:36.803 06:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:36.803 06:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:36.803 06:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:36.803 06:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:36.803 06:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:36.803 06:27:20 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:36.803 06:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:36.803 06:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:36.803 06:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:36.803 06:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:36.803 06:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:36.803 06:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.803 06:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.803 06:27:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.803 06:27:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:36.803 06:27:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.803 06:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:36.803 "name": "raid_bdev1", 00:17:36.804 "uuid": "0dca9bf1-1dd4-438f-a660-3a02c2a7ad80", 00:17:36.804 "strip_size_kb": 0, 00:17:36.804 "state": "online", 00:17:36.804 "raid_level": "raid1", 00:17:36.804 "superblock": false, 00:17:36.804 "num_base_bdevs": 4, 00:17:36.804 "num_base_bdevs_discovered": 3, 00:17:36.804 "num_base_bdevs_operational": 3, 00:17:36.804 "base_bdevs_list": [ 00:17:36.804 { 00:17:36.804 "name": "spare", 00:17:36.804 "uuid": "129f42bd-d319-5f43-9f40-4eacde3cc9ba", 00:17:36.804 "is_configured": true, 00:17:36.804 "data_offset": 0, 00:17:36.804 "data_size": 65536 00:17:36.804 }, 00:17:36.804 { 00:17:36.804 "name": null, 00:17:36.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.804 "is_configured": false, 00:17:36.804 
"data_offset": 0, 00:17:36.804 "data_size": 65536 00:17:36.804 }, 00:17:36.804 { 00:17:36.804 "name": "BaseBdev3", 00:17:36.804 "uuid": "02527702-21d5-5ebb-af80-4686f7b1ab9c", 00:17:36.804 "is_configured": true, 00:17:36.804 "data_offset": 0, 00:17:36.804 "data_size": 65536 00:17:36.804 }, 00:17:36.804 { 00:17:36.804 "name": "BaseBdev4", 00:17:36.804 "uuid": "84786597-5073-57be-902f-a92f0d473cc8", 00:17:36.804 "is_configured": true, 00:17:36.804 "data_offset": 0, 00:17:36.804 "data_size": 65536 00:17:36.804 } 00:17:36.804 ] 00:17:36.804 }' 00:17:36.804 06:27:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:36.804 06:27:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:37.063 76.12 IOPS, 228.38 MiB/s [2024-11-26T06:27:21.200Z] 06:27:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:37.063 06:27:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.063 06:27:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:37.063 [2024-11-26 06:27:21.145008] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:37.063 [2024-11-26 06:27:21.145092] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:37.322 00:17:37.322 Latency(us) 00:17:37.322 [2024-11-26T06:27:21.459Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:37.322 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:17:37.322 raid_bdev1 : 8.14 75.57 226.72 0.00 0.00 19349.37 372.04 116304.94 00:17:37.322 [2024-11-26T06:27:21.459Z] =================================================================================================================== 00:17:37.322 [2024-11-26T06:27:21.459Z] Total : 75.57 226.72 0.00 0.00 19349.37 372.04 116304.94 00:17:37.322 [2024-11-26 06:27:21.279179] 
bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:37.322 [2024-11-26 06:27:21.279260] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:37.322 [2024-11-26 06:27:21.279386] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:37.322 [2024-11-26 06:27:21.279402] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:37.322 { 00:17:37.322 "results": [ 00:17:37.322 { 00:17:37.322 "job": "raid_bdev1", 00:17:37.322 "core_mask": "0x1", 00:17:37.322 "workload": "randrw", 00:17:37.322 "percentage": 50, 00:17:37.322 "status": "finished", 00:17:37.322 "queue_depth": 2, 00:17:37.322 "io_size": 3145728, 00:17:37.322 "runtime": 8.137654, 00:17:37.322 "iops": 75.57460663724459, 00:17:37.322 "mibps": 226.72381991173376, 00:17:37.322 "io_failed": 0, 00:17:37.322 "io_timeout": 0, 00:17:37.322 "avg_latency_us": 19349.37341427912, 00:17:37.322 "min_latency_us": 372.0384279475983, 00:17:37.322 "max_latency_us": 116304.93624454149 00:17:37.322 } 00:17:37.322 ], 00:17:37.322 "core_count": 1 00:17:37.322 } 00:17:37.322 06:27:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.322 06:27:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.322 06:27:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:17:37.322 06:27:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.322 06:27:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:37.322 06:27:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.322 06:27:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:37.322 06:27:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 
00:17:37.322 06:27:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:17:37.322 06:27:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:17:37.322 06:27:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:37.322 06:27:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:17:37.322 06:27:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:37.322 06:27:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:37.322 06:27:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:37.322 06:27:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:17:37.322 06:27:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:37.322 06:27:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:37.322 06:27:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:17:37.582 /dev/nbd0 00:17:37.582 06:27:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:37.582 06:27:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:37.582 06:27:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:37.582 06:27:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:17:37.582 06:27:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:37.582 06:27:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:37.582 06:27:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 
00:17:37.582 06:27:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:17:37.582 06:27:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:37.582 06:27:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:37.582 06:27:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:37.582 1+0 records in 00:17:37.582 1+0 records out 00:17:37.582 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000517153 s, 7.9 MB/s 00:17:37.582 06:27:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:37.582 06:27:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:17:37.582 06:27:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:37.582 06:27:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:37.582 06:27:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:17:37.582 06:27:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:37.582 06:27:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:37.582 06:27:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:17:37.582 06:27:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:17:37.582 06:27:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:17:37.582 06:27:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:17:37.582 06:27:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:17:37.582 06:27:21 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:17:37.582 06:27:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:37.582 06:27:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:17:37.582 06:27:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:37.582 06:27:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:17:37.582 06:27:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:37.582 06:27:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:17:37.582 06:27:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:37.582 06:27:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:37.582 06:27:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:17:37.841 /dev/nbd1 00:17:37.841 06:27:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:37.841 06:27:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:37.841 06:27:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:37.841 06:27:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:17:37.841 06:27:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:37.841 06:27:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:37.841 06:27:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:37.841 06:27:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:17:37.841 06:27:21 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:37.841 06:27:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:37.841 06:27:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:37.841 1+0 records in 00:17:37.841 1+0 records out 00:17:37.841 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000728979 s, 5.6 MB/s 00:17:37.841 06:27:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:37.841 06:27:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:17:37.841 06:27:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:37.841 06:27:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:37.841 06:27:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:17:37.841 06:27:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:37.841 06:27:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:37.841 06:27:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:17:38.100 06:27:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:17:38.100 06:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:38.100 06:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:17:38.100 06:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:38.100 06:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:17:38.100 06:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # 
for i in "${nbd_list[@]}" 00:17:38.100 06:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:38.360 06:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:38.360 06:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:38.360 06:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:38.360 06:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:38.360 06:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:38.360 06:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:38.360 06:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:17:38.360 06:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:17:38.360 06:27:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:17:38.360 06:27:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:17:38.360 06:27:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:17:38.360 06:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:38.360 06:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:17:38.360 06:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:38.360 06:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:17:38.360 06:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:38.360 06:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:17:38.360 06:27:22 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:38.360 06:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:38.360 06:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:17:38.620 /dev/nbd1 00:17:38.620 06:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:38.620 06:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:38.620 06:27:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:38.620 06:27:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:17:38.620 06:27:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:38.620 06:27:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:38.620 06:27:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:38.620 06:27:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:17:38.620 06:27:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:38.620 06:27:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:38.620 06:27:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:38.620 1+0 records in 00:17:38.620 1+0 records out 00:17:38.620 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000481573 s, 8.5 MB/s 00:17:38.620 06:27:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:38.620 06:27:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 
00:17:38.620 06:27:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:38.620 06:27:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:38.620 06:27:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:17:38.620 06:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:38.620 06:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:38.620 06:27:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:17:38.620 06:27:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:17:38.620 06:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:38.620 06:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:17:38.620 06:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:38.620 06:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:17:38.620 06:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:38.620 06:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:38.880 06:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:38.880 06:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:38.880 06:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:38.880 06:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:38.880 06:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:38.880 
06:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:38.880 06:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:17:38.880 06:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:17:38.880 06:27:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:38.880 06:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:38.880 06:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:38.880 06:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:38.880 06:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:17:38.880 06:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:38.880 06:27:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:39.140 06:27:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:39.140 06:27:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:39.140 06:27:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:39.140 06:27:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:39.140 06:27:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:39.140 06:27:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:39.140 06:27:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:17:39.140 06:27:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:17:39.140 06:27:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = 
true ']' 00:17:39.140 06:27:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 79295 00:17:39.140 06:27:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 79295 ']' 00:17:39.140 06:27:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 79295 00:17:39.140 06:27:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:17:39.140 06:27:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:39.140 06:27:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79295 00:17:39.140 06:27:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:39.140 06:27:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:39.140 killing process with pid 79295 00:17:39.140 Received shutdown signal, test time was about 10.122938 seconds 00:17:39.140 00:17:39.140 Latency(us) 00:17:39.140 [2024-11-26T06:27:23.277Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:39.140 [2024-11-26T06:27:23.277Z] =================================================================================================================== 00:17:39.140 [2024-11-26T06:27:23.277Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:39.140 06:27:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79295' 00:17:39.140 06:27:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 79295 00:17:39.140 [2024-11-26 06:27:23.236721] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:39.140 06:27:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 79295 00:17:39.707 [2024-11-26 06:27:23.727716] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:41.095 06:27:25 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@786 -- # return 0 00:17:41.095 00:17:41.095 real 0m13.887s 00:17:41.095 user 0m17.270s 00:17:41.095 sys 0m2.105s 00:17:41.095 ************************************ 00:17:41.095 END TEST raid_rebuild_test_io 00:17:41.095 ************************************ 00:17:41.095 06:27:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:41.095 06:27:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:41.095 06:27:25 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:17:41.095 06:27:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:41.095 06:27:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:41.095 06:27:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:41.095 ************************************ 00:17:41.095 START TEST raid_rebuild_test_sb_io 00:17:41.095 ************************************ 00:17:41.095 06:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:17:41.095 06:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:41.095 06:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:17:41.095 06:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:41.095 06:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:17:41.095 06:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:41.095 06:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:41.095 06:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:41.095 06:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 
00:17:41.095 06:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:41.095 06:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:41.095 06:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:41.095 06:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:41.095 06:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:41.095 06:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:41.095 06:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:41.095 06:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:41.095 06:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:17:41.095 06:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:41.095 06:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:41.095 06:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:41.095 06:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:41.095 06:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:41.095 06:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:41.095 06:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:41.095 06:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:41.095 06:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:41.095 06:27:25 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:41.095 06:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:41.095 06:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:41.095 06:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:41.095 06:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79718 00:17:41.095 06:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79718 00:17:41.095 06:27:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:41.095 06:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 79718 ']' 00:17:41.095 06:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:41.095 06:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:41.095 06:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:41.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:41.095 06:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:41.095 06:27:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:41.095 [2024-11-26 06:27:25.201486] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:17:41.095 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:41.095 Zero copy mechanism will not be used. 
00:17:41.095 [2024-11-26 06:27:25.201762] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79718 ] 00:17:41.355 [2024-11-26 06:27:25.379850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.614 [2024-11-26 06:27:25.521692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:41.874 [2024-11-26 06:27:25.769467] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:41.874 [2024-11-26 06:27:25.769549] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:42.134 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:42.134 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:17:42.134 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:42.134 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:42.134 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.134 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:42.134 BaseBdev1_malloc 00:17:42.134 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.134 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:42.134 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.134 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:42.134 [2024-11-26 06:27:26.117167] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:42.134 [2024-11-26 06:27:26.117246] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:42.134 [2024-11-26 06:27:26.117275] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:42.134 [2024-11-26 06:27:26.117288] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:42.134 [2024-11-26 06:27:26.119814] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:42.134 [2024-11-26 06:27:26.119857] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:42.134 BaseBdev1 00:17:42.134 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.134 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:42.134 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:42.134 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.134 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:42.134 BaseBdev2_malloc 00:17:42.134 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.134 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:42.134 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.134 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:42.134 [2024-11-26 06:27:26.180305] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:42.134 [2024-11-26 06:27:26.180484] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:17:42.134 [2024-11-26 06:27:26.180520] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:42.134 [2024-11-26 06:27:26.180539] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:42.134 [2024-11-26 06:27:26.183287] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:42.134 [2024-11-26 06:27:26.183326] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:42.134 BaseBdev2 00:17:42.134 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.134 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:42.134 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:42.134 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.134 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:42.134 BaseBdev3_malloc 00:17:42.134 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.134 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:42.134 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.134 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:42.134 [2024-11-26 06:27:26.256770] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:42.134 [2024-11-26 06:27:26.256843] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:42.134 [2024-11-26 06:27:26.256882] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:42.134 
[2024-11-26 06:27:26.256895] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:42.134 [2024-11-26 06:27:26.259366] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:42.134 [2024-11-26 06:27:26.259408] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:42.134 BaseBdev3 00:17:42.134 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.134 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:42.134 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:42.134 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.134 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:42.395 BaseBdev4_malloc 00:17:42.395 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.395 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:17:42.395 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.395 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:42.395 [2024-11-26 06:27:26.320602] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:17:42.395 [2024-11-26 06:27:26.320665] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:42.395 [2024-11-26 06:27:26.320686] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:42.395 [2024-11-26 06:27:26.320698] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:42.395 [2024-11-26 06:27:26.323237] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:42.395 [2024-11-26 06:27:26.323278] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:42.395 BaseBdev4 00:17:42.395 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.395 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:42.395 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.395 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:42.395 spare_malloc 00:17:42.395 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.395 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:42.395 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.395 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:42.395 spare_delay 00:17:42.395 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.395 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:42.395 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.395 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:42.395 [2024-11-26 06:27:26.394798] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:42.395 [2024-11-26 06:27:26.394923] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:42.395 [2024-11-26 06:27:26.394951] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000a880 00:17:42.395 [2024-11-26 06:27:26.394964] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:42.395 [2024-11-26 06:27:26.397619] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:42.395 [2024-11-26 06:27:26.397664] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:42.395 spare 00:17:42.395 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.395 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:17:42.395 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.395 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:42.395 [2024-11-26 06:27:26.406836] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:42.395 [2024-11-26 06:27:26.409048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:42.395 [2024-11-26 06:27:26.409132] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:42.395 [2024-11-26 06:27:26.409201] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:42.395 [2024-11-26 06:27:26.409407] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:42.395 [2024-11-26 06:27:26.409432] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:42.395 [2024-11-26 06:27:26.409719] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:42.395 [2024-11-26 06:27:26.409912] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:42.395 [2024-11-26 06:27:26.409923] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:42.395 [2024-11-26 06:27:26.410059] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:42.395 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.395 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:17:42.395 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:42.395 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:42.395 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:42.395 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:42.395 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:42.395 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:42.395 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:42.395 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:42.395 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:42.395 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.395 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.395 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.395 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:42.395 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.395 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:42.395 "name": "raid_bdev1", 00:17:42.395 "uuid": "c0936633-71cb-4a46-83ed-8d5d6c91d1d2", 00:17:42.395 "strip_size_kb": 0, 00:17:42.395 "state": "online", 00:17:42.395 "raid_level": "raid1", 00:17:42.395 "superblock": true, 00:17:42.395 "num_base_bdevs": 4, 00:17:42.395 "num_base_bdevs_discovered": 4, 00:17:42.395 "num_base_bdevs_operational": 4, 00:17:42.395 "base_bdevs_list": [ 00:17:42.396 { 00:17:42.396 "name": "BaseBdev1", 00:17:42.396 "uuid": "990a5008-58b4-5672-a6d2-a46fdbd5cc5f", 00:17:42.396 "is_configured": true, 00:17:42.396 "data_offset": 2048, 00:17:42.396 "data_size": 63488 00:17:42.396 }, 00:17:42.396 { 00:17:42.396 "name": "BaseBdev2", 00:17:42.396 "uuid": "7308d18a-5e3b-5877-97d5-2e87e41d4fc1", 00:17:42.396 "is_configured": true, 00:17:42.396 "data_offset": 2048, 00:17:42.396 "data_size": 63488 00:17:42.396 }, 00:17:42.396 { 00:17:42.396 "name": "BaseBdev3", 00:17:42.396 "uuid": "ef395dd2-4789-59a6-8644-2d9a66e35a24", 00:17:42.396 "is_configured": true, 00:17:42.396 "data_offset": 2048, 00:17:42.396 "data_size": 63488 00:17:42.396 }, 00:17:42.396 { 00:17:42.396 "name": "BaseBdev4", 00:17:42.396 "uuid": "1d642367-38af-53c8-9098-e0d148bbc38a", 00:17:42.396 "is_configured": true, 00:17:42.396 "data_offset": 2048, 00:17:42.396 "data_size": 63488 00:17:42.396 } 00:17:42.396 ] 00:17:42.396 }' 00:17:42.396 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:42.396 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:42.964 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:42.964 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:42.964 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.964 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:42.964 [2024-11-26 06:27:26.846557] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:42.964 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.964 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:17:42.964 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.964 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.964 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:42.964 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:42.964 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.964 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:17:42.964 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:17:42.964 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:17:42.964 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:42.964 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.964 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:42.964 [2024-11-26 06:27:26.917975] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:42.964 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.964 06:27:26 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:42.964 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:42.964 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:42.964 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:42.964 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:42.964 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:42.964 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:42.964 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:42.964 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:42.964 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:42.964 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.964 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.964 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:42.964 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.964 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.964 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:42.964 "name": "raid_bdev1", 00:17:42.964 "uuid": "c0936633-71cb-4a46-83ed-8d5d6c91d1d2", 00:17:42.964 "strip_size_kb": 0, 00:17:42.964 "state": "online", 00:17:42.964 "raid_level": "raid1", 00:17:42.964 
"superblock": true, 00:17:42.964 "num_base_bdevs": 4, 00:17:42.964 "num_base_bdevs_discovered": 3, 00:17:42.964 "num_base_bdevs_operational": 3, 00:17:42.964 "base_bdevs_list": [ 00:17:42.964 { 00:17:42.964 "name": null, 00:17:42.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.964 "is_configured": false, 00:17:42.964 "data_offset": 0, 00:17:42.964 "data_size": 63488 00:17:42.964 }, 00:17:42.964 { 00:17:42.964 "name": "BaseBdev2", 00:17:42.964 "uuid": "7308d18a-5e3b-5877-97d5-2e87e41d4fc1", 00:17:42.964 "is_configured": true, 00:17:42.964 "data_offset": 2048, 00:17:42.964 "data_size": 63488 00:17:42.964 }, 00:17:42.964 { 00:17:42.964 "name": "BaseBdev3", 00:17:42.964 "uuid": "ef395dd2-4789-59a6-8644-2d9a66e35a24", 00:17:42.964 "is_configured": true, 00:17:42.964 "data_offset": 2048, 00:17:42.964 "data_size": 63488 00:17:42.964 }, 00:17:42.964 { 00:17:42.964 "name": "BaseBdev4", 00:17:42.964 "uuid": "1d642367-38af-53c8-9098-e0d148bbc38a", 00:17:42.964 "is_configured": true, 00:17:42.964 "data_offset": 2048, 00:17:42.964 "data_size": 63488 00:17:42.964 } 00:17:42.964 ] 00:17:42.964 }' 00:17:42.964 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:42.964 06:27:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:42.964 [2024-11-26 06:27:27.027336] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:17:42.964 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:42.964 Zero copy mechanism will not be used. 00:17:42.964 Running I/O for 60 seconds... 
00:17:43.530 06:27:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:43.530 06:27:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.530 06:27:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:43.530 [2024-11-26 06:27:27.395327] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:43.530 06:27:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.531 06:27:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:43.531 [2024-11-26 06:27:27.455002] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:17:43.531 [2024-11-26 06:27:27.457481] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:43.531 [2024-11-26 06:27:27.577222] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:43.531 [2024-11-26 06:27:27.578401] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:43.790 [2024-11-26 06:27:27.826878] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:43.790 [2024-11-26 06:27:27.827589] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:44.049 166.00 IOPS, 498.00 MiB/s [2024-11-26T06:27:28.186Z] [2024-11-26 06:27:28.100158] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:17:44.307 [2024-11-26 06:27:28.236931] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:44.307 06:27:28 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:44.307 06:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:44.307 06:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:44.566 06:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:44.566 06:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:44.566 06:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.566 06:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.566 06:27:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.566 06:27:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:44.566 06:27:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.566 06:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:44.566 "name": "raid_bdev1", 00:17:44.566 "uuid": "c0936633-71cb-4a46-83ed-8d5d6c91d1d2", 00:17:44.566 "strip_size_kb": 0, 00:17:44.566 "state": "online", 00:17:44.566 "raid_level": "raid1", 00:17:44.566 "superblock": true, 00:17:44.566 "num_base_bdevs": 4, 00:17:44.566 "num_base_bdevs_discovered": 4, 00:17:44.566 "num_base_bdevs_operational": 4, 00:17:44.566 "process": { 00:17:44.566 "type": "rebuild", 00:17:44.566 "target": "spare", 00:17:44.566 "progress": { 00:17:44.566 "blocks": 10240, 00:17:44.566 "percent": 16 00:17:44.566 } 00:17:44.566 }, 00:17:44.566 "base_bdevs_list": [ 00:17:44.566 { 00:17:44.566 "name": "spare", 00:17:44.566 "uuid": "8d73715c-4e06-57c3-874c-42135076b995", 00:17:44.566 "is_configured": true, 00:17:44.566 "data_offset": 2048, 00:17:44.566 "data_size": 63488 
00:17:44.566 }, 00:17:44.566 { 00:17:44.566 "name": "BaseBdev2", 00:17:44.566 "uuid": "7308d18a-5e3b-5877-97d5-2e87e41d4fc1", 00:17:44.566 "is_configured": true, 00:17:44.566 "data_offset": 2048, 00:17:44.566 "data_size": 63488 00:17:44.566 }, 00:17:44.566 { 00:17:44.566 "name": "BaseBdev3", 00:17:44.566 "uuid": "ef395dd2-4789-59a6-8644-2d9a66e35a24", 00:17:44.566 "is_configured": true, 00:17:44.566 "data_offset": 2048, 00:17:44.566 "data_size": 63488 00:17:44.566 }, 00:17:44.566 { 00:17:44.566 "name": "BaseBdev4", 00:17:44.566 "uuid": "1d642367-38af-53c8-9098-e0d148bbc38a", 00:17:44.566 "is_configured": true, 00:17:44.566 "data_offset": 2048, 00:17:44.566 "data_size": 63488 00:17:44.566 } 00:17:44.566 ] 00:17:44.566 }' 00:17:44.566 06:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:44.566 06:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:44.566 06:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:44.566 06:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:44.566 06:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:44.566 06:27:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.566 06:27:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:44.566 [2024-11-26 06:27:28.575813] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:44.566 [2024-11-26 06:27:28.633321] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:17:44.825 [2024-11-26 06:27:28.749777] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:44.825 [2024-11-26 
06:27:28.766134] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:44.825 [2024-11-26 06:27:28.766256] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:44.825 [2024-11-26 06:27:28.766275] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:44.825 [2024-11-26 06:27:28.800834] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:17:44.825 06:27:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.825 06:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:44.825 06:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:44.825 06:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:44.825 06:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:44.825 06:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:44.825 06:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:44.825 06:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:44.825 06:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:44.825 06:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:44.825 06:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:44.825 06:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.825 06:27:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.825 06:27:28 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:44.825 06:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.825 06:27:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.825 06:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:44.825 "name": "raid_bdev1", 00:17:44.825 "uuid": "c0936633-71cb-4a46-83ed-8d5d6c91d1d2", 00:17:44.825 "strip_size_kb": 0, 00:17:44.825 "state": "online", 00:17:44.825 "raid_level": "raid1", 00:17:44.825 "superblock": true, 00:17:44.825 "num_base_bdevs": 4, 00:17:44.825 "num_base_bdevs_discovered": 3, 00:17:44.825 "num_base_bdevs_operational": 3, 00:17:44.825 "base_bdevs_list": [ 00:17:44.825 { 00:17:44.825 "name": null, 00:17:44.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.825 "is_configured": false, 00:17:44.825 "data_offset": 0, 00:17:44.825 "data_size": 63488 00:17:44.825 }, 00:17:44.825 { 00:17:44.825 "name": "BaseBdev2", 00:17:44.825 "uuid": "7308d18a-5e3b-5877-97d5-2e87e41d4fc1", 00:17:44.825 "is_configured": true, 00:17:44.825 "data_offset": 2048, 00:17:44.825 "data_size": 63488 00:17:44.825 }, 00:17:44.825 { 00:17:44.825 "name": "BaseBdev3", 00:17:44.825 "uuid": "ef395dd2-4789-59a6-8644-2d9a66e35a24", 00:17:44.825 "is_configured": true, 00:17:44.825 "data_offset": 2048, 00:17:44.825 "data_size": 63488 00:17:44.825 }, 00:17:44.825 { 00:17:44.825 "name": "BaseBdev4", 00:17:44.825 "uuid": "1d642367-38af-53c8-9098-e0d148bbc38a", 00:17:44.825 "is_configured": true, 00:17:44.825 "data_offset": 2048, 00:17:44.825 "data_size": 63488 00:17:44.825 } 00:17:44.825 ] 00:17:44.825 }' 00:17:44.825 06:27:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:44.825 06:27:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:45.343 118.50 IOPS, 355.50 MiB/s 
[2024-11-26T06:27:29.480Z] 06:27:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:45.343 06:27:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:45.343 06:27:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:45.343 06:27:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:45.343 06:27:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:45.343 06:27:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.343 06:27:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.343 06:27:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:45.343 06:27:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.343 06:27:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.343 06:27:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:45.343 "name": "raid_bdev1", 00:17:45.343 "uuid": "c0936633-71cb-4a46-83ed-8d5d6c91d1d2", 00:17:45.343 "strip_size_kb": 0, 00:17:45.343 "state": "online", 00:17:45.343 "raid_level": "raid1", 00:17:45.343 "superblock": true, 00:17:45.343 "num_base_bdevs": 4, 00:17:45.343 "num_base_bdevs_discovered": 3, 00:17:45.343 "num_base_bdevs_operational": 3, 00:17:45.343 "base_bdevs_list": [ 00:17:45.343 { 00:17:45.343 "name": null, 00:17:45.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.343 "is_configured": false, 00:17:45.343 "data_offset": 0, 00:17:45.343 "data_size": 63488 00:17:45.343 }, 00:17:45.343 { 00:17:45.343 "name": "BaseBdev2", 00:17:45.343 "uuid": "7308d18a-5e3b-5877-97d5-2e87e41d4fc1", 00:17:45.343 
"is_configured": true, 00:17:45.343 "data_offset": 2048, 00:17:45.343 "data_size": 63488 00:17:45.343 }, 00:17:45.343 { 00:17:45.343 "name": "BaseBdev3", 00:17:45.343 "uuid": "ef395dd2-4789-59a6-8644-2d9a66e35a24", 00:17:45.343 "is_configured": true, 00:17:45.343 "data_offset": 2048, 00:17:45.343 "data_size": 63488 00:17:45.343 }, 00:17:45.343 { 00:17:45.343 "name": "BaseBdev4", 00:17:45.343 "uuid": "1d642367-38af-53c8-9098-e0d148bbc38a", 00:17:45.343 "is_configured": true, 00:17:45.343 "data_offset": 2048, 00:17:45.343 "data_size": 63488 00:17:45.343 } 00:17:45.343 ] 00:17:45.343 }' 00:17:45.343 06:27:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:45.343 06:27:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:45.343 06:27:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:45.343 06:27:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:45.343 06:27:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:45.343 06:27:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.343 06:27:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:45.343 [2024-11-26 06:27:29.399292] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:45.343 06:27:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.343 06:27:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:45.343 [2024-11-26 06:27:29.467023] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:17:45.343 [2024-11-26 06:27:29.469563] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:45.602 
[2024-11-26 06:27:29.590049] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:45.602 [2024-11-26 06:27:29.592461] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:45.861 [2024-11-26 06:27:29.816545] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:45.861 [2024-11-26 06:27:29.817061] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:46.120 124.00 IOPS, 372.00 MiB/s [2024-11-26T06:27:30.257Z] [2024-11-26 06:27:30.181007] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:17:46.120 [2024-11-26 06:27:30.183603] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:17:46.378 [2024-11-26 06:27:30.427725] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:46.378 06:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:46.378 06:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:46.378 06:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:46.378 06:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:46.378 06:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:46.378 06:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.378 06:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.378 
06:27:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.378 06:27:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:46.378 06:27:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.378 06:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:46.378 "name": "raid_bdev1", 00:17:46.378 "uuid": "c0936633-71cb-4a46-83ed-8d5d6c91d1d2", 00:17:46.378 "strip_size_kb": 0, 00:17:46.378 "state": "online", 00:17:46.378 "raid_level": "raid1", 00:17:46.378 "superblock": true, 00:17:46.378 "num_base_bdevs": 4, 00:17:46.378 "num_base_bdevs_discovered": 4, 00:17:46.378 "num_base_bdevs_operational": 4, 00:17:46.378 "process": { 00:17:46.378 "type": "rebuild", 00:17:46.378 "target": "spare", 00:17:46.378 "progress": { 00:17:46.378 "blocks": 10240, 00:17:46.378 "percent": 16 00:17:46.378 } 00:17:46.378 }, 00:17:46.378 "base_bdevs_list": [ 00:17:46.378 { 00:17:46.378 "name": "spare", 00:17:46.378 "uuid": "8d73715c-4e06-57c3-874c-42135076b995", 00:17:46.378 "is_configured": true, 00:17:46.378 "data_offset": 2048, 00:17:46.378 "data_size": 63488 00:17:46.378 }, 00:17:46.378 { 00:17:46.378 "name": "BaseBdev2", 00:17:46.378 "uuid": "7308d18a-5e3b-5877-97d5-2e87e41d4fc1", 00:17:46.378 "is_configured": true, 00:17:46.378 "data_offset": 2048, 00:17:46.378 "data_size": 63488 00:17:46.378 }, 00:17:46.378 { 00:17:46.378 "name": "BaseBdev3", 00:17:46.378 "uuid": "ef395dd2-4789-59a6-8644-2d9a66e35a24", 00:17:46.378 "is_configured": true, 00:17:46.378 "data_offset": 2048, 00:17:46.378 "data_size": 63488 00:17:46.378 }, 00:17:46.378 { 00:17:46.378 "name": "BaseBdev4", 00:17:46.378 "uuid": "1d642367-38af-53c8-9098-e0d148bbc38a", 00:17:46.378 "is_configured": true, 00:17:46.378 "data_offset": 2048, 00:17:46.378 "data_size": 63488 00:17:46.378 } 00:17:46.378 ] 00:17:46.378 }' 00:17:46.636 06:27:30 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:46.636 06:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:46.636 06:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:46.636 06:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:46.636 06:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:46.636 06:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:46.636 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:46.636 06:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:17:46.636 06:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:46.636 06:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:17:46.636 06:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:46.636 06:27:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.636 06:27:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:46.636 [2024-11-26 06:27:30.612894] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:46.636 [2024-11-26 06:27:30.652555] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:17:46.894 [2024-11-26 06:27:30.862465] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:17:46.895 [2024-11-26 06:27:30.862648] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:17:46.895 06:27:30 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.895 06:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:17:46.895 06:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:17:46.895 06:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:46.895 06:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:46.895 06:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:46.895 06:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:46.895 06:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:46.895 06:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.895 06:27:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.895 06:27:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:46.895 06:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.895 06:27:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.895 06:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:46.895 "name": "raid_bdev1", 00:17:46.895 "uuid": "c0936633-71cb-4a46-83ed-8d5d6c91d1d2", 00:17:46.895 "strip_size_kb": 0, 00:17:46.895 "state": "online", 00:17:46.895 "raid_level": "raid1", 00:17:46.895 "superblock": true, 00:17:46.895 "num_base_bdevs": 4, 00:17:46.895 "num_base_bdevs_discovered": 3, 00:17:46.895 "num_base_bdevs_operational": 3, 00:17:46.895 "process": { 00:17:46.895 "type": "rebuild", 00:17:46.895 "target": "spare", 00:17:46.895 "progress": { 
00:17:46.895 "blocks": 14336, 00:17:46.895 "percent": 22 00:17:46.895 } 00:17:46.895 }, 00:17:46.895 "base_bdevs_list": [ 00:17:46.895 { 00:17:46.895 "name": "spare", 00:17:46.895 "uuid": "8d73715c-4e06-57c3-874c-42135076b995", 00:17:46.895 "is_configured": true, 00:17:46.895 "data_offset": 2048, 00:17:46.895 "data_size": 63488 00:17:46.895 }, 00:17:46.895 { 00:17:46.895 "name": null, 00:17:46.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.895 "is_configured": false, 00:17:46.895 "data_offset": 0, 00:17:46.895 "data_size": 63488 00:17:46.895 }, 00:17:46.895 { 00:17:46.895 "name": "BaseBdev3", 00:17:46.895 "uuid": "ef395dd2-4789-59a6-8644-2d9a66e35a24", 00:17:46.895 "is_configured": true, 00:17:46.895 "data_offset": 2048, 00:17:46.895 "data_size": 63488 00:17:46.895 }, 00:17:46.895 { 00:17:46.895 "name": "BaseBdev4", 00:17:46.895 "uuid": "1d642367-38af-53c8-9098-e0d148bbc38a", 00:17:46.895 "is_configured": true, 00:17:46.895 "data_offset": 2048, 00:17:46.895 "data_size": 63488 00:17:46.895 } 00:17:46.895 ] 00:17:46.895 }' 00:17:46.895 06:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:46.895 06:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:46.895 06:27:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:46.895 06:27:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:46.895 06:27:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=524 00:17:46.895 06:27:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:46.895 06:27:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:46.895 06:27:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:46.895 
06:27:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:46.895 06:27:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:46.895 06:27:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:47.153 06:27:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.153 06:27:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.153 06:27:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:47.153 06:27:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.153 115.00 IOPS, 345.00 MiB/s [2024-11-26T06:27:31.290Z] 06:27:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.153 06:27:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:47.153 "name": "raid_bdev1", 00:17:47.153 "uuid": "c0936633-71cb-4a46-83ed-8d5d6c91d1d2", 00:17:47.153 "strip_size_kb": 0, 00:17:47.153 "state": "online", 00:17:47.153 "raid_level": "raid1", 00:17:47.153 "superblock": true, 00:17:47.153 "num_base_bdevs": 4, 00:17:47.153 "num_base_bdevs_discovered": 3, 00:17:47.153 "num_base_bdevs_operational": 3, 00:17:47.153 "process": { 00:17:47.153 "type": "rebuild", 00:17:47.153 "target": "spare", 00:17:47.153 "progress": { 00:17:47.153 "blocks": 16384, 00:17:47.153 "percent": 25 00:17:47.153 } 00:17:47.153 }, 00:17:47.153 "base_bdevs_list": [ 00:17:47.153 { 00:17:47.153 "name": "spare", 00:17:47.153 "uuid": "8d73715c-4e06-57c3-874c-42135076b995", 00:17:47.153 "is_configured": true, 00:17:47.153 "data_offset": 2048, 00:17:47.153 "data_size": 63488 00:17:47.153 }, 00:17:47.153 { 00:17:47.153 "name": null, 00:17:47.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.153 "is_configured": false, 
00:17:47.153 "data_offset": 0, 00:17:47.153 "data_size": 63488 00:17:47.153 }, 00:17:47.153 { 00:17:47.153 "name": "BaseBdev3", 00:17:47.153 "uuid": "ef395dd2-4789-59a6-8644-2d9a66e35a24", 00:17:47.153 "is_configured": true, 00:17:47.153 "data_offset": 2048, 00:17:47.153 "data_size": 63488 00:17:47.153 }, 00:17:47.153 { 00:17:47.153 "name": "BaseBdev4", 00:17:47.153 "uuid": "1d642367-38af-53c8-9098-e0d148bbc38a", 00:17:47.153 "is_configured": true, 00:17:47.153 "data_offset": 2048, 00:17:47.153 "data_size": 63488 00:17:47.153 } 00:17:47.153 ] 00:17:47.153 }' 00:17:47.153 06:27:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:47.153 06:27:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:47.153 06:27:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:47.153 06:27:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:47.153 06:27:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:47.153 [2024-11-26 06:27:31.187190] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:17:47.153 [2024-11-26 06:27:31.189055] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:17:47.721 [2024-11-26 06:27:31.614737] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:17:47.721 [2024-11-26 06:27:31.616649] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:17:48.240 104.00 IOPS, 312.00 MiB/s [2024-11-26T06:27:32.377Z] 06:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:48.240 06:27:32 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:48.240 06:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:48.240 06:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:48.240 06:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:48.240 06:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:48.240 06:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.240 06:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.240 06:27:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.240 06:27:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:48.240 06:27:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.240 06:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:48.240 "name": "raid_bdev1", 00:17:48.240 "uuid": "c0936633-71cb-4a46-83ed-8d5d6c91d1d2", 00:17:48.240 "strip_size_kb": 0, 00:17:48.240 "state": "online", 00:17:48.240 "raid_level": "raid1", 00:17:48.240 "superblock": true, 00:17:48.240 "num_base_bdevs": 4, 00:17:48.240 "num_base_bdevs_discovered": 3, 00:17:48.240 "num_base_bdevs_operational": 3, 00:17:48.240 "process": { 00:17:48.240 "type": "rebuild", 00:17:48.240 "target": "spare", 00:17:48.240 "progress": { 00:17:48.240 "blocks": 32768, 00:17:48.240 "percent": 51 00:17:48.240 } 00:17:48.240 }, 00:17:48.240 "base_bdevs_list": [ 00:17:48.240 { 00:17:48.240 "name": "spare", 00:17:48.240 "uuid": "8d73715c-4e06-57c3-874c-42135076b995", 00:17:48.240 "is_configured": true, 00:17:48.240 "data_offset": 2048, 
00:17:48.240 "data_size": 63488 00:17:48.240 }, 00:17:48.240 { 00:17:48.240 "name": null, 00:17:48.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.240 "is_configured": false, 00:17:48.240 "data_offset": 0, 00:17:48.240 "data_size": 63488 00:17:48.240 }, 00:17:48.240 { 00:17:48.240 "name": "BaseBdev3", 00:17:48.240 "uuid": "ef395dd2-4789-59a6-8644-2d9a66e35a24", 00:17:48.240 "is_configured": true, 00:17:48.240 "data_offset": 2048, 00:17:48.240 "data_size": 63488 00:17:48.240 }, 00:17:48.240 { 00:17:48.240 "name": "BaseBdev4", 00:17:48.240 "uuid": "1d642367-38af-53c8-9098-e0d148bbc38a", 00:17:48.240 "is_configured": true, 00:17:48.240 "data_offset": 2048, 00:17:48.240 "data_size": 63488 00:17:48.240 } 00:17:48.240 ] 00:17:48.240 }' 00:17:48.240 06:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:48.240 06:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:48.240 06:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:48.240 06:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:48.240 06:27:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:48.806 [2024-11-26 06:27:32.797216] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:17:48.806 [2024-11-26 06:27:32.797861] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:17:49.065 [2024-11-26 06:27:33.018804] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:17:49.322 94.00 IOPS, 282.00 MiB/s [2024-11-26T06:27:33.459Z] [2024-11-26 06:27:33.247372] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 
49152 offset_end: 55296 00:17:49.322 [2024-11-26 06:27:33.248948] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:17:49.322 06:27:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:49.322 06:27:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:49.322 06:27:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:49.322 06:27:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:49.322 06:27:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:49.322 06:27:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:49.322 06:27:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.322 06:27:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.322 06:27:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.322 06:27:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:49.322 06:27:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.322 06:27:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:49.322 "name": "raid_bdev1", 00:17:49.322 "uuid": "c0936633-71cb-4a46-83ed-8d5d6c91d1d2", 00:17:49.322 "strip_size_kb": 0, 00:17:49.322 "state": "online", 00:17:49.322 "raid_level": "raid1", 00:17:49.322 "superblock": true, 00:17:49.322 "num_base_bdevs": 4, 00:17:49.322 "num_base_bdevs_discovered": 3, 00:17:49.322 "num_base_bdevs_operational": 3, 00:17:49.322 "process": { 00:17:49.322 "type": "rebuild", 00:17:49.322 "target": "spare", 
00:17:49.322 "progress": { 00:17:49.322 "blocks": 51200, 00:17:49.322 "percent": 80 00:17:49.322 } 00:17:49.322 }, 00:17:49.322 "base_bdevs_list": [ 00:17:49.322 { 00:17:49.322 "name": "spare", 00:17:49.322 "uuid": "8d73715c-4e06-57c3-874c-42135076b995", 00:17:49.322 "is_configured": true, 00:17:49.322 "data_offset": 2048, 00:17:49.322 "data_size": 63488 00:17:49.322 }, 00:17:49.322 { 00:17:49.322 "name": null, 00:17:49.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.322 "is_configured": false, 00:17:49.322 "data_offset": 0, 00:17:49.322 "data_size": 63488 00:17:49.322 }, 00:17:49.322 { 00:17:49.322 "name": "BaseBdev3", 00:17:49.322 "uuid": "ef395dd2-4789-59a6-8644-2d9a66e35a24", 00:17:49.322 "is_configured": true, 00:17:49.322 "data_offset": 2048, 00:17:49.322 "data_size": 63488 00:17:49.322 }, 00:17:49.322 { 00:17:49.322 "name": "BaseBdev4", 00:17:49.322 "uuid": "1d642367-38af-53c8-9098-e0d148bbc38a", 00:17:49.322 "is_configured": true, 00:17:49.322 "data_offset": 2048, 00:17:49.322 "data_size": 63488 00:17:49.322 } 00:17:49.322 ] 00:17:49.322 }' 00:17:49.322 06:27:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:49.322 06:27:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:49.322 06:27:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:49.581 06:27:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:49.581 06:27:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:50.147 [2024-11-26 06:27:34.019727] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:50.147 85.00 IOPS, 255.00 MiB/s [2024-11-26T06:27:34.284Z] [2024-11-26 06:27:34.125406] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:50.147 [2024-11-26 
06:27:34.130647] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:50.406 06:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:50.406 06:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:50.406 06:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:50.406 06:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:50.406 06:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:50.406 06:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:50.406 06:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.406 06:27:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.406 06:27:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:50.406 06:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.406 06:27:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.406 06:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:50.406 "name": "raid_bdev1", 00:17:50.406 "uuid": "c0936633-71cb-4a46-83ed-8d5d6c91d1d2", 00:17:50.406 "strip_size_kb": 0, 00:17:50.406 "state": "online", 00:17:50.406 "raid_level": "raid1", 00:17:50.406 "superblock": true, 00:17:50.406 "num_base_bdevs": 4, 00:17:50.406 "num_base_bdevs_discovered": 3, 00:17:50.406 "num_base_bdevs_operational": 3, 00:17:50.406 "base_bdevs_list": [ 00:17:50.406 { 00:17:50.406 "name": "spare", 00:17:50.406 "uuid": "8d73715c-4e06-57c3-874c-42135076b995", 00:17:50.406 "is_configured": true, 00:17:50.406 
"data_offset": 2048, 00:17:50.406 "data_size": 63488 00:17:50.406 }, 00:17:50.406 { 00:17:50.406 "name": null, 00:17:50.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.406 "is_configured": false, 00:17:50.406 "data_offset": 0, 00:17:50.406 "data_size": 63488 00:17:50.406 }, 00:17:50.406 { 00:17:50.406 "name": "BaseBdev3", 00:17:50.406 "uuid": "ef395dd2-4789-59a6-8644-2d9a66e35a24", 00:17:50.406 "is_configured": true, 00:17:50.406 "data_offset": 2048, 00:17:50.406 "data_size": 63488 00:17:50.406 }, 00:17:50.406 { 00:17:50.406 "name": "BaseBdev4", 00:17:50.406 "uuid": "1d642367-38af-53c8-9098-e0d148bbc38a", 00:17:50.407 "is_configured": true, 00:17:50.407 "data_offset": 2048, 00:17:50.407 "data_size": 63488 00:17:50.407 } 00:17:50.407 ] 00:17:50.407 }' 00:17:50.407 06:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:50.666 06:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:50.666 06:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:50.666 06:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:50.666 06:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:17:50.666 06:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:50.666 06:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:50.666 06:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:50.666 06:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:50.666 06:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:50.666 06:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:17:50.666 06:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.666 06:27:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.666 06:27:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:50.666 06:27:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.666 06:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:50.666 "name": "raid_bdev1", 00:17:50.666 "uuid": "c0936633-71cb-4a46-83ed-8d5d6c91d1d2", 00:17:50.666 "strip_size_kb": 0, 00:17:50.666 "state": "online", 00:17:50.666 "raid_level": "raid1", 00:17:50.666 "superblock": true, 00:17:50.666 "num_base_bdevs": 4, 00:17:50.666 "num_base_bdevs_discovered": 3, 00:17:50.666 "num_base_bdevs_operational": 3, 00:17:50.666 "base_bdevs_list": [ 00:17:50.666 { 00:17:50.666 "name": "spare", 00:17:50.666 "uuid": "8d73715c-4e06-57c3-874c-42135076b995", 00:17:50.666 "is_configured": true, 00:17:50.666 "data_offset": 2048, 00:17:50.666 "data_size": 63488 00:17:50.666 }, 00:17:50.666 { 00:17:50.666 "name": null, 00:17:50.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.666 "is_configured": false, 00:17:50.666 "data_offset": 0, 00:17:50.666 "data_size": 63488 00:17:50.666 }, 00:17:50.666 { 00:17:50.666 "name": "BaseBdev3", 00:17:50.666 "uuid": "ef395dd2-4789-59a6-8644-2d9a66e35a24", 00:17:50.666 "is_configured": true, 00:17:50.666 "data_offset": 2048, 00:17:50.666 "data_size": 63488 00:17:50.666 }, 00:17:50.666 { 00:17:50.666 "name": "BaseBdev4", 00:17:50.666 "uuid": "1d642367-38af-53c8-9098-e0d148bbc38a", 00:17:50.666 "is_configured": true, 00:17:50.666 "data_offset": 2048, 00:17:50.666 "data_size": 63488 00:17:50.666 } 00:17:50.666 ] 00:17:50.666 }' 00:17:50.666 06:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:17:50.666 06:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:50.666 06:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:50.666 06:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:50.666 06:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:50.666 06:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:50.666 06:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:50.666 06:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:50.666 06:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:50.666 06:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:50.666 06:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.666 06:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.666 06:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:50.666 06:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.666 06:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.666 06:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.666 06:27:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.666 06:27:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:50.666 06:27:34 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.926 06:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.926 "name": "raid_bdev1", 00:17:50.926 "uuid": "c0936633-71cb-4a46-83ed-8d5d6c91d1d2", 00:17:50.926 "strip_size_kb": 0, 00:17:50.926 "state": "online", 00:17:50.926 "raid_level": "raid1", 00:17:50.926 "superblock": true, 00:17:50.926 "num_base_bdevs": 4, 00:17:50.926 "num_base_bdevs_discovered": 3, 00:17:50.926 "num_base_bdevs_operational": 3, 00:17:50.926 "base_bdevs_list": [ 00:17:50.926 { 00:17:50.926 "name": "spare", 00:17:50.926 "uuid": "8d73715c-4e06-57c3-874c-42135076b995", 00:17:50.926 "is_configured": true, 00:17:50.926 "data_offset": 2048, 00:17:50.926 "data_size": 63488 00:17:50.926 }, 00:17:50.926 { 00:17:50.926 "name": null, 00:17:50.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.926 "is_configured": false, 00:17:50.926 "data_offset": 0, 00:17:50.926 "data_size": 63488 00:17:50.926 }, 00:17:50.926 { 00:17:50.926 "name": "BaseBdev3", 00:17:50.926 "uuid": "ef395dd2-4789-59a6-8644-2d9a66e35a24", 00:17:50.926 "is_configured": true, 00:17:50.926 "data_offset": 2048, 00:17:50.926 "data_size": 63488 00:17:50.926 }, 00:17:50.926 { 00:17:50.926 "name": "BaseBdev4", 00:17:50.926 "uuid": "1d642367-38af-53c8-9098-e0d148bbc38a", 00:17:50.926 "is_configured": true, 00:17:50.926 "data_offset": 2048, 00:17:50.926 "data_size": 63488 00:17:50.926 } 00:17:50.926 ] 00:17:50.926 }' 00:17:50.926 06:27:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.926 06:27:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:51.185 78.25 IOPS, 234.75 MiB/s [2024-11-26T06:27:35.322Z] 06:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:51.185 06:27:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.185 06:27:35 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:51.185 [2024-11-26 06:27:35.205704] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:51.185 [2024-11-26 06:27:35.205750] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:51.185 00:17:51.185 Latency(us) 00:17:51.185 [2024-11-26T06:27:35.322Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:51.185 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:17:51.185 raid_bdev1 : 8.24 76.74 230.22 0.00 0.00 18879.01 377.40 122715.44 00:17:51.185 [2024-11-26T06:27:35.322Z] =================================================================================================================== 00:17:51.185 [2024-11-26T06:27:35.322Z] Total : 76.74 230.22 0.00 0.00 18879.01 377.40 122715.44 00:17:51.185 [2024-11-26 06:27:35.274526] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:51.185 [2024-11-26 06:27:35.274600] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:51.185 [2024-11-26 06:27:35.274731] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:51.185 [2024-11-26 06:27:35.274747] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:51.185 { 00:17:51.185 "results": [ 00:17:51.185 { 00:17:51.185 "job": "raid_bdev1", 00:17:51.185 "core_mask": "0x1", 00:17:51.185 "workload": "randrw", 00:17:51.185 "percentage": 50, 00:17:51.185 "status": "finished", 00:17:51.185 "queue_depth": 2, 00:17:51.185 "io_size": 3145728, 00:17:51.185 "runtime": 8.235464, 00:17:51.185 "iops": 76.74127408000327, 00:17:51.185 "mibps": 230.22382224000978, 00:17:51.185 "io_failed": 0, 00:17:51.185 "io_timeout": 0, 00:17:51.185 "avg_latency_us": 18879.005914543144, 00:17:51.185 "min_latency_us": 
377.40436681222707, 00:17:51.185 "max_latency_us": 122715.44454148471 00:17:51.185 } 00:17:51.185 ], 00:17:51.185 "core_count": 1 00:17:51.185 } 00:17:51.185 06:27:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.185 06:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.185 06:27:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.186 06:27:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:51.186 06:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:17:51.186 06:27:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.445 06:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:51.445 06:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:51.445 06:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:17:51.445 06:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:17:51.445 06:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:51.445 06:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:17:51.445 06:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:51.445 06:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:51.445 06:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:51.445 06:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:17:51.445 06:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:51.445 06:27:35 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:51.445 06:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:17:51.445 /dev/nbd0 00:17:51.704 06:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:51.704 06:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:51.704 06:27:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:51.704 06:27:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:17:51.704 06:27:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:51.704 06:27:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:51.704 06:27:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:51.704 06:27:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:17:51.704 06:27:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:51.704 06:27:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:51.704 06:27:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:51.704 1+0 records in 00:17:51.704 1+0 records out 00:17:51.704 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000383517 s, 10.7 MB/s 00:17:51.704 06:27:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:51.704 06:27:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:17:51.704 06:27:35 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:51.704 06:27:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:51.704 06:27:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:17:51.704 06:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:51.704 06:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:51.704 06:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:17:51.704 06:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:17:51.704 06:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:17:51.704 06:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:17:51.704 06:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:17:51.704 06:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:17:51.704 06:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:51.704 06:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:17:51.704 06:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:51.704 06:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:17:51.704 06:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:51.704 06:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:17:51.704 06:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:51.704 06:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- 
# (( i < 1 )) 00:17:51.704 06:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:17:51.964 /dev/nbd1 00:17:51.964 06:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:51.964 06:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:51.964 06:27:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:51.964 06:27:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:17:51.964 06:27:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:51.964 06:27:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:51.964 06:27:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:51.964 06:27:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:17:51.964 06:27:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:51.964 06:27:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:51.964 06:27:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:51.964 1+0 records in 00:17:51.964 1+0 records out 00:17:51.964 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000328113 s, 12.5 MB/s 00:17:51.964 06:27:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:51.964 06:27:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:17:51.964 06:27:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:51.964 06:27:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:51.964 06:27:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:17:51.964 06:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:51.964 06:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:51.964 06:27:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:51.964 06:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:17:51.964 06:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:51.964 06:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:17:51.964 06:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:51.964 06:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:17:51.964 06:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:51.964 06:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:52.224 06:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:52.224 06:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:52.224 06:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:52.224 06:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:52.224 06:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:52.224 06:27:36 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:52.224 06:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:17:52.224 06:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:17:52.224 06:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:17:52.224 06:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:17:52.224 06:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:17:52.224 06:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:52.224 06:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:17:52.224 06:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:52.224 06:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:17:52.224 06:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:52.224 06:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:17:52.224 06:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:52.224 06:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:52.224 06:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:17:52.484 /dev/nbd1 00:17:52.484 06:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:52.484 06:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:52.484 06:27:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:52.484 06:27:36 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:17:52.484 06:27:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:52.484 06:27:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:52.484 06:27:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:52.484 06:27:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:17:52.484 06:27:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:52.484 06:27:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:52.484 06:27:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:52.484 1+0 records in 00:17:52.484 1+0 records out 00:17:52.484 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000267626 s, 15.3 MB/s 00:17:52.743 06:27:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:52.743 06:27:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:17:52.743 06:27:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:52.743 06:27:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:52.743 06:27:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:17:52.743 06:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:52.743 06:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:52.743 06:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 
00:17:52.743 06:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:17:52.743 06:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:52.743 06:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:17:52.743 06:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:52.743 06:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:17:52.743 06:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:52.743 06:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:53.001 06:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:53.001 06:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:53.001 06:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:53.001 06:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:53.001 06:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:53.001 06:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:53.001 06:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:17:53.001 06:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:17:53.001 06:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:53.001 06:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:53.001 06:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0') 00:17:53.001 06:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:53.001 06:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:17:53.001 06:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:53.001 06:27:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:53.282 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:53.282 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:53.282 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:53.282 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:53.282 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:53.282 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:53.282 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:17:53.282 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:17:53.282 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:53.282 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:53.282 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.282 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:53.282 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.282 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b 
spare_delay -p spare 00:17:53.282 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.282 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:53.282 [2024-11-26 06:27:37.219738] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:53.282 [2024-11-26 06:27:37.219816] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:53.282 [2024-11-26 06:27:37.219845] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:17:53.282 [2024-11-26 06:27:37.219859] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:53.282 [2024-11-26 06:27:37.222745] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:53.282 [2024-11-26 06:27:37.222791] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:53.282 [2024-11-26 06:27:37.222905] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:53.282 [2024-11-26 06:27:37.222973] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:53.282 [2024-11-26 06:27:37.223130] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:53.282 [2024-11-26 06:27:37.223235] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:53.282 spare 00:17:53.282 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.282 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:53.282 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.283 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:53.283 [2024-11-26 06:27:37.323216] bdev_raid.c:1734:raid_bdev_configure_cont: 
*DEBUG*: io device register 0x617000007b00 00:17:53.283 [2024-11-26 06:27:37.323285] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:53.283 [2024-11-26 06:27:37.323739] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:17:53.283 [2024-11-26 06:27:37.324013] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:53.283 [2024-11-26 06:27:37.324026] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:53.283 [2024-11-26 06:27:37.324327] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:53.283 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.283 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:53.283 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:53.283 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:53.283 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:53.283 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:53.283 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:53.283 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:53.283 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:53.283 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:53.283 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:53.283 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.283 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.283 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:53.283 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.283 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.283 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:53.283 "name": "raid_bdev1", 00:17:53.283 "uuid": "c0936633-71cb-4a46-83ed-8d5d6c91d1d2", 00:17:53.283 "strip_size_kb": 0, 00:17:53.283 "state": "online", 00:17:53.283 "raid_level": "raid1", 00:17:53.283 "superblock": true, 00:17:53.283 "num_base_bdevs": 4, 00:17:53.283 "num_base_bdevs_discovered": 3, 00:17:53.283 "num_base_bdevs_operational": 3, 00:17:53.283 "base_bdevs_list": [ 00:17:53.283 { 00:17:53.283 "name": "spare", 00:17:53.283 "uuid": "8d73715c-4e06-57c3-874c-42135076b995", 00:17:53.283 "is_configured": true, 00:17:53.283 "data_offset": 2048, 00:17:53.283 "data_size": 63488 00:17:53.283 }, 00:17:53.283 { 00:17:53.283 "name": null, 00:17:53.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.283 "is_configured": false, 00:17:53.283 "data_offset": 2048, 00:17:53.283 "data_size": 63488 00:17:53.283 }, 00:17:53.283 { 00:17:53.283 "name": "BaseBdev3", 00:17:53.283 "uuid": "ef395dd2-4789-59a6-8644-2d9a66e35a24", 00:17:53.283 "is_configured": true, 00:17:53.283 "data_offset": 2048, 00:17:53.284 "data_size": 63488 00:17:53.284 }, 00:17:53.284 { 00:17:53.284 "name": "BaseBdev4", 00:17:53.284 "uuid": "1d642367-38af-53c8-9098-e0d148bbc38a", 00:17:53.284 "is_configured": true, 00:17:53.284 "data_offset": 2048, 00:17:53.284 "data_size": 63488 00:17:53.284 } 00:17:53.284 ] 00:17:53.284 }' 00:17:53.284 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:53.284 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:53.856 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:53.856 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:53.856 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:53.856 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:53.856 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:53.856 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.856 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.856 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.856 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:53.856 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.856 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:53.856 "name": "raid_bdev1", 00:17:53.856 "uuid": "c0936633-71cb-4a46-83ed-8d5d6c91d1d2", 00:17:53.856 "strip_size_kb": 0, 00:17:53.856 "state": "online", 00:17:53.856 "raid_level": "raid1", 00:17:53.856 "superblock": true, 00:17:53.856 "num_base_bdevs": 4, 00:17:53.856 "num_base_bdevs_discovered": 3, 00:17:53.856 "num_base_bdevs_operational": 3, 00:17:53.856 "base_bdevs_list": [ 00:17:53.856 { 00:17:53.856 "name": "spare", 00:17:53.856 "uuid": "8d73715c-4e06-57c3-874c-42135076b995", 00:17:53.856 "is_configured": true, 00:17:53.856 "data_offset": 2048, 00:17:53.856 "data_size": 63488 00:17:53.856 }, 
00:17:53.856 { 00:17:53.856 "name": null, 00:17:53.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.856 "is_configured": false, 00:17:53.856 "data_offset": 2048, 00:17:53.856 "data_size": 63488 00:17:53.856 }, 00:17:53.856 { 00:17:53.856 "name": "BaseBdev3", 00:17:53.857 "uuid": "ef395dd2-4789-59a6-8644-2d9a66e35a24", 00:17:53.857 "is_configured": true, 00:17:53.857 "data_offset": 2048, 00:17:53.857 "data_size": 63488 00:17:53.857 }, 00:17:53.857 { 00:17:53.857 "name": "BaseBdev4", 00:17:53.857 "uuid": "1d642367-38af-53c8-9098-e0d148bbc38a", 00:17:53.857 "is_configured": true, 00:17:53.857 "data_offset": 2048, 00:17:53.857 "data_size": 63488 00:17:53.857 } 00:17:53.857 ] 00:17:53.857 }' 00:17:53.857 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:53.857 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:53.857 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:53.857 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:53.857 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:53.857 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.857 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.857 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:53.857 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.857 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:53.857 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:53.857 06:27:37 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.857 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:53.857 [2024-11-26 06:27:37.963300] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:53.857 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.857 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:53.857 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:53.857 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:53.857 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:53.857 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:53.857 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:53.857 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:53.857 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:53.857 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:53.857 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:53.857 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.857 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.857 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:53.857 06:27:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.857 
06:27:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.115 06:27:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:54.115 "name": "raid_bdev1", 00:17:54.115 "uuid": "c0936633-71cb-4a46-83ed-8d5d6c91d1d2", 00:17:54.115 "strip_size_kb": 0, 00:17:54.115 "state": "online", 00:17:54.115 "raid_level": "raid1", 00:17:54.115 "superblock": true, 00:17:54.115 "num_base_bdevs": 4, 00:17:54.115 "num_base_bdevs_discovered": 2, 00:17:54.115 "num_base_bdevs_operational": 2, 00:17:54.115 "base_bdevs_list": [ 00:17:54.115 { 00:17:54.115 "name": null, 00:17:54.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.115 "is_configured": false, 00:17:54.115 "data_offset": 0, 00:17:54.115 "data_size": 63488 00:17:54.115 }, 00:17:54.115 { 00:17:54.115 "name": null, 00:17:54.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.115 "is_configured": false, 00:17:54.115 "data_offset": 2048, 00:17:54.115 "data_size": 63488 00:17:54.115 }, 00:17:54.115 { 00:17:54.115 "name": "BaseBdev3", 00:17:54.115 "uuid": "ef395dd2-4789-59a6-8644-2d9a66e35a24", 00:17:54.115 "is_configured": true, 00:17:54.115 "data_offset": 2048, 00:17:54.115 "data_size": 63488 00:17:54.115 }, 00:17:54.115 { 00:17:54.115 "name": "BaseBdev4", 00:17:54.115 "uuid": "1d642367-38af-53c8-9098-e0d148bbc38a", 00:17:54.115 "is_configured": true, 00:17:54.115 "data_offset": 2048, 00:17:54.115 "data_size": 63488 00:17:54.115 } 00:17:54.115 ] 00:17:54.115 }' 00:17:54.115 06:27:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:54.115 06:27:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:54.373 06:27:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:54.373 06:27:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.373 06:27:38 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:54.373 [2024-11-26 06:27:38.410622] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:54.373 [2024-11-26 06:27:38.410876] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:17:54.373 [2024-11-26 06:27:38.410897] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:54.373 [2024-11-26 06:27:38.410948] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:54.373 [2024-11-26 06:27:38.427951] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:17:54.373 06:27:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.373 06:27:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:54.373 [2024-11-26 06:27:38.430419] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:55.309 06:27:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:55.309 06:27:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:55.309 06:27:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:55.309 06:27:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:55.309 06:27:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:55.309 06:27:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.309 06:27:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:55.309 06:27:39 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.569 06:27:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:55.569 06:27:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.569 06:27:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:55.569 "name": "raid_bdev1", 00:17:55.569 "uuid": "c0936633-71cb-4a46-83ed-8d5d6c91d1d2", 00:17:55.569 "strip_size_kb": 0, 00:17:55.569 "state": "online", 00:17:55.569 "raid_level": "raid1", 00:17:55.569 "superblock": true, 00:17:55.569 "num_base_bdevs": 4, 00:17:55.569 "num_base_bdevs_discovered": 3, 00:17:55.569 "num_base_bdevs_operational": 3, 00:17:55.569 "process": { 00:17:55.569 "type": "rebuild", 00:17:55.569 "target": "spare", 00:17:55.569 "progress": { 00:17:55.569 "blocks": 20480, 00:17:55.569 "percent": 32 00:17:55.569 } 00:17:55.569 }, 00:17:55.569 "base_bdevs_list": [ 00:17:55.569 { 00:17:55.569 "name": "spare", 00:17:55.569 "uuid": "8d73715c-4e06-57c3-874c-42135076b995", 00:17:55.569 "is_configured": true, 00:17:55.569 "data_offset": 2048, 00:17:55.569 "data_size": 63488 00:17:55.569 }, 00:17:55.569 { 00:17:55.569 "name": null, 00:17:55.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.569 "is_configured": false, 00:17:55.569 "data_offset": 2048, 00:17:55.569 "data_size": 63488 00:17:55.569 }, 00:17:55.569 { 00:17:55.569 "name": "BaseBdev3", 00:17:55.569 "uuid": "ef395dd2-4789-59a6-8644-2d9a66e35a24", 00:17:55.569 "is_configured": true, 00:17:55.569 "data_offset": 2048, 00:17:55.569 "data_size": 63488 00:17:55.569 }, 00:17:55.569 { 00:17:55.569 "name": "BaseBdev4", 00:17:55.569 "uuid": "1d642367-38af-53c8-9098-e0d148bbc38a", 00:17:55.569 "is_configured": true, 00:17:55.569 "data_offset": 2048, 00:17:55.570 "data_size": 63488 00:17:55.570 } 00:17:55.570 ] 00:17:55.570 }' 00:17:55.570 06:27:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:17:55.570 06:27:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:55.570 06:27:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:55.570 06:27:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:55.570 06:27:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:55.570 06:27:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.570 06:27:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:55.570 [2024-11-26 06:27:39.586827] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:55.570 [2024-11-26 06:27:39.640591] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:55.570 [2024-11-26 06:27:39.640674] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:55.570 [2024-11-26 06:27:39.640691] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:55.570 [2024-11-26 06:27:39.640706] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:55.570 06:27:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.570 06:27:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:55.570 06:27:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:55.570 06:27:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:55.570 06:27:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:55.570 06:27:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:17:55.570 06:27:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:55.570 06:27:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:55.570 06:27:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:55.570 06:27:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:55.570 06:27:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:55.570 06:27:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.570 06:27:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:55.570 06:27:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.570 06:27:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:55.570 06:27:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.829 06:27:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:55.829 "name": "raid_bdev1", 00:17:55.829 "uuid": "c0936633-71cb-4a46-83ed-8d5d6c91d1d2", 00:17:55.829 "strip_size_kb": 0, 00:17:55.829 "state": "online", 00:17:55.829 "raid_level": "raid1", 00:17:55.829 "superblock": true, 00:17:55.829 "num_base_bdevs": 4, 00:17:55.829 "num_base_bdevs_discovered": 2, 00:17:55.829 "num_base_bdevs_operational": 2, 00:17:55.829 "base_bdevs_list": [ 00:17:55.829 { 00:17:55.829 "name": null, 00:17:55.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.829 "is_configured": false, 00:17:55.829 "data_offset": 0, 00:17:55.829 "data_size": 63488 00:17:55.829 }, 00:17:55.829 { 00:17:55.829 "name": null, 00:17:55.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.829 "is_configured": false, 00:17:55.829 
"data_offset": 2048, 00:17:55.829 "data_size": 63488 00:17:55.829 }, 00:17:55.829 { 00:17:55.829 "name": "BaseBdev3", 00:17:55.829 "uuid": "ef395dd2-4789-59a6-8644-2d9a66e35a24", 00:17:55.829 "is_configured": true, 00:17:55.829 "data_offset": 2048, 00:17:55.829 "data_size": 63488 00:17:55.829 }, 00:17:55.829 { 00:17:55.829 "name": "BaseBdev4", 00:17:55.829 "uuid": "1d642367-38af-53c8-9098-e0d148bbc38a", 00:17:55.829 "is_configured": true, 00:17:55.829 "data_offset": 2048, 00:17:55.829 "data_size": 63488 00:17:55.829 } 00:17:55.829 ] 00:17:55.829 }' 00:17:55.829 06:27:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:55.829 06:27:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:56.088 06:27:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:56.088 06:27:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.088 06:27:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:56.088 [2024-11-26 06:27:40.103001] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:56.088 [2024-11-26 06:27:40.103200] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:56.088 [2024-11-26 06:27:40.103285] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:17:56.088 [2024-11-26 06:27:40.103343] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:56.088 [2024-11-26 06:27:40.104014] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:56.088 [2024-11-26 06:27:40.104124] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:56.088 [2024-11-26 06:27:40.104302] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:56.088 [2024-11-26 
06:27:40.104364] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:17:56.088 [2024-11-26 06:27:40.104474] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:56.088 [2024-11-26 06:27:40.104560] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:56.088 [2024-11-26 06:27:40.120784] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:17:56.088 spare 00:17:56.088 06:27:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.088 06:27:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:56.088 [2024-11-26 06:27:40.123294] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:57.023 06:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:57.023 06:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:57.023 06:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:57.023 06:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:57.023 06:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:57.023 06:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.023 06:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.023 06:27:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.023 06:27:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:57.282 06:27:41 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.282 06:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:57.282 "name": "raid_bdev1", 00:17:57.282 "uuid": "c0936633-71cb-4a46-83ed-8d5d6c91d1d2", 00:17:57.282 "strip_size_kb": 0, 00:17:57.282 "state": "online", 00:17:57.282 "raid_level": "raid1", 00:17:57.282 "superblock": true, 00:17:57.282 "num_base_bdevs": 4, 00:17:57.282 "num_base_bdevs_discovered": 3, 00:17:57.282 "num_base_bdevs_operational": 3, 00:17:57.282 "process": { 00:17:57.282 "type": "rebuild", 00:17:57.282 "target": "spare", 00:17:57.282 "progress": { 00:17:57.282 "blocks": 20480, 00:17:57.282 "percent": 32 00:17:57.282 } 00:17:57.282 }, 00:17:57.282 "base_bdevs_list": [ 00:17:57.282 { 00:17:57.282 "name": "spare", 00:17:57.282 "uuid": "8d73715c-4e06-57c3-874c-42135076b995", 00:17:57.282 "is_configured": true, 00:17:57.282 "data_offset": 2048, 00:17:57.282 "data_size": 63488 00:17:57.282 }, 00:17:57.282 { 00:17:57.282 "name": null, 00:17:57.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.282 "is_configured": false, 00:17:57.282 "data_offset": 2048, 00:17:57.282 "data_size": 63488 00:17:57.282 }, 00:17:57.282 { 00:17:57.282 "name": "BaseBdev3", 00:17:57.282 "uuid": "ef395dd2-4789-59a6-8644-2d9a66e35a24", 00:17:57.282 "is_configured": true, 00:17:57.282 "data_offset": 2048, 00:17:57.282 "data_size": 63488 00:17:57.282 }, 00:17:57.282 { 00:17:57.282 "name": "BaseBdev4", 00:17:57.282 "uuid": "1d642367-38af-53c8-9098-e0d148bbc38a", 00:17:57.282 "is_configured": true, 00:17:57.282 "data_offset": 2048, 00:17:57.282 "data_size": 63488 00:17:57.282 } 00:17:57.282 ] 00:17:57.282 }' 00:17:57.282 06:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:57.282 06:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:57.282 06:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:17:57.282 06:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:57.282 06:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:57.282 06:27:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.282 06:27:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:57.282 [2024-11-26 06:27:41.287043] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:57.282 [2024-11-26 06:27:41.333663] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:57.282 [2024-11-26 06:27:41.333779] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:57.282 [2024-11-26 06:27:41.333806] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:57.282 [2024-11-26 06:27:41.333814] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:57.282 06:27:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.282 06:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:57.282 06:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:57.282 06:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:57.282 06:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:57.282 06:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:57.282 06:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:57.282 06:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- 
# local raid_bdev_info 00:17:57.282 06:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:57.282 06:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:57.282 06:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:57.282 06:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.282 06:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.282 06:27:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.282 06:27:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:57.282 06:27:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.282 06:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:57.282 "name": "raid_bdev1", 00:17:57.282 "uuid": "c0936633-71cb-4a46-83ed-8d5d6c91d1d2", 00:17:57.282 "strip_size_kb": 0, 00:17:57.282 "state": "online", 00:17:57.282 "raid_level": "raid1", 00:17:57.282 "superblock": true, 00:17:57.282 "num_base_bdevs": 4, 00:17:57.282 "num_base_bdevs_discovered": 2, 00:17:57.282 "num_base_bdevs_operational": 2, 00:17:57.282 "base_bdevs_list": [ 00:17:57.282 { 00:17:57.282 "name": null, 00:17:57.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.282 "is_configured": false, 00:17:57.282 "data_offset": 0, 00:17:57.282 "data_size": 63488 00:17:57.282 }, 00:17:57.282 { 00:17:57.282 "name": null, 00:17:57.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.282 "is_configured": false, 00:17:57.282 "data_offset": 2048, 00:17:57.282 "data_size": 63488 00:17:57.282 }, 00:17:57.282 { 00:17:57.282 "name": "BaseBdev3", 00:17:57.282 "uuid": "ef395dd2-4789-59a6-8644-2d9a66e35a24", 00:17:57.282 "is_configured": true, 
00:17:57.282 "data_offset": 2048, 00:17:57.282 "data_size": 63488 00:17:57.282 }, 00:17:57.282 { 00:17:57.282 "name": "BaseBdev4", 00:17:57.282 "uuid": "1d642367-38af-53c8-9098-e0d148bbc38a", 00:17:57.282 "is_configured": true, 00:17:57.282 "data_offset": 2048, 00:17:57.282 "data_size": 63488 00:17:57.282 } 00:17:57.282 ] 00:17:57.282 }' 00:17:57.282 06:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:57.282 06:27:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:57.874 06:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:57.874 06:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:57.874 06:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:57.874 06:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:57.874 06:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:57.874 06:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.874 06:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.874 06:27:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.874 06:27:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:57.874 06:27:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.874 06:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:57.874 "name": "raid_bdev1", 00:17:57.874 "uuid": "c0936633-71cb-4a46-83ed-8d5d6c91d1d2", 00:17:57.874 "strip_size_kb": 0, 00:17:57.874 "state": "online", 00:17:57.874 "raid_level": "raid1", 00:17:57.874 
"superblock": true, 00:17:57.874 "num_base_bdevs": 4, 00:17:57.874 "num_base_bdevs_discovered": 2, 00:17:57.874 "num_base_bdevs_operational": 2, 00:17:57.874 "base_bdevs_list": [ 00:17:57.874 { 00:17:57.874 "name": null, 00:17:57.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.874 "is_configured": false, 00:17:57.874 "data_offset": 0, 00:17:57.874 "data_size": 63488 00:17:57.874 }, 00:17:57.874 { 00:17:57.874 "name": null, 00:17:57.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.874 "is_configured": false, 00:17:57.875 "data_offset": 2048, 00:17:57.875 "data_size": 63488 00:17:57.875 }, 00:17:57.875 { 00:17:57.875 "name": "BaseBdev3", 00:17:57.875 "uuid": "ef395dd2-4789-59a6-8644-2d9a66e35a24", 00:17:57.875 "is_configured": true, 00:17:57.875 "data_offset": 2048, 00:17:57.875 "data_size": 63488 00:17:57.875 }, 00:17:57.875 { 00:17:57.875 "name": "BaseBdev4", 00:17:57.875 "uuid": "1d642367-38af-53c8-9098-e0d148bbc38a", 00:17:57.875 "is_configured": true, 00:17:57.875 "data_offset": 2048, 00:17:57.875 "data_size": 63488 00:17:57.875 } 00:17:57.875 ] 00:17:57.875 }' 00:17:57.875 06:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:57.875 06:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:57.875 06:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:57.875 06:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:57.875 06:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:57.875 06:27:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.875 06:27:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:57.875 06:27:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:17:57.875 06:27:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:57.875 06:27:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.875 06:27:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:58.133 [2024-11-26 06:27:42.006185] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:58.133 [2024-11-26 06:27:42.006280] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:58.133 [2024-11-26 06:27:42.006311] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:17:58.133 [2024-11-26 06:27:42.006323] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:58.133 [2024-11-26 06:27:42.006930] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:58.133 [2024-11-26 06:27:42.006949] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:58.134 [2024-11-26 06:27:42.007061] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:58.134 [2024-11-26 06:27:42.007100] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:17:58.134 [2024-11-26 06:27:42.007128] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:58.134 [2024-11-26 06:27:42.007141] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:58.134 BaseBdev1 00:17:58.134 06:27:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.134 06:27:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:59.069 06:27:43 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:59.069 06:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:59.069 06:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:59.069 06:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:59.069 06:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:59.069 06:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:59.069 06:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:59.069 06:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:59.069 06:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:59.069 06:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:59.069 06:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.069 06:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.069 06:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.069 06:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:59.069 06:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.069 06:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:59.069 "name": "raid_bdev1", 00:17:59.069 "uuid": "c0936633-71cb-4a46-83ed-8d5d6c91d1d2", 00:17:59.069 "strip_size_kb": 0, 00:17:59.069 "state": "online", 00:17:59.069 "raid_level": "raid1", 00:17:59.069 "superblock": true, 00:17:59.069 
"num_base_bdevs": 4, 00:17:59.069 "num_base_bdevs_discovered": 2, 00:17:59.069 "num_base_bdevs_operational": 2, 00:17:59.069 "base_bdevs_list": [ 00:17:59.069 { 00:17:59.069 "name": null, 00:17:59.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.069 "is_configured": false, 00:17:59.069 "data_offset": 0, 00:17:59.069 "data_size": 63488 00:17:59.069 }, 00:17:59.069 { 00:17:59.069 "name": null, 00:17:59.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.069 "is_configured": false, 00:17:59.069 "data_offset": 2048, 00:17:59.069 "data_size": 63488 00:17:59.069 }, 00:17:59.069 { 00:17:59.069 "name": "BaseBdev3", 00:17:59.069 "uuid": "ef395dd2-4789-59a6-8644-2d9a66e35a24", 00:17:59.069 "is_configured": true, 00:17:59.069 "data_offset": 2048, 00:17:59.069 "data_size": 63488 00:17:59.069 }, 00:17:59.069 { 00:17:59.069 "name": "BaseBdev4", 00:17:59.069 "uuid": "1d642367-38af-53c8-9098-e0d148bbc38a", 00:17:59.069 "is_configured": true, 00:17:59.069 "data_offset": 2048, 00:17:59.069 "data_size": 63488 00:17:59.069 } 00:17:59.069 ] 00:17:59.069 }' 00:17:59.069 06:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:59.069 06:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:59.637 06:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:59.637 06:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:59.637 06:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:59.637 06:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:59.637 06:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:59.637 06:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.637 06:27:43 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.637 06:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.637 06:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:59.637 06:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.637 06:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:59.637 "name": "raid_bdev1", 00:17:59.637 "uuid": "c0936633-71cb-4a46-83ed-8d5d6c91d1d2", 00:17:59.637 "strip_size_kb": 0, 00:17:59.637 "state": "online", 00:17:59.637 "raid_level": "raid1", 00:17:59.637 "superblock": true, 00:17:59.637 "num_base_bdevs": 4, 00:17:59.637 "num_base_bdevs_discovered": 2, 00:17:59.637 "num_base_bdevs_operational": 2, 00:17:59.637 "base_bdevs_list": [ 00:17:59.637 { 00:17:59.637 "name": null, 00:17:59.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.637 "is_configured": false, 00:17:59.637 "data_offset": 0, 00:17:59.637 "data_size": 63488 00:17:59.637 }, 00:17:59.637 { 00:17:59.637 "name": null, 00:17:59.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.637 "is_configured": false, 00:17:59.637 "data_offset": 2048, 00:17:59.637 "data_size": 63488 00:17:59.637 }, 00:17:59.637 { 00:17:59.637 "name": "BaseBdev3", 00:17:59.637 "uuid": "ef395dd2-4789-59a6-8644-2d9a66e35a24", 00:17:59.637 "is_configured": true, 00:17:59.637 "data_offset": 2048, 00:17:59.637 "data_size": 63488 00:17:59.637 }, 00:17:59.637 { 00:17:59.637 "name": "BaseBdev4", 00:17:59.637 "uuid": "1d642367-38af-53c8-9098-e0d148bbc38a", 00:17:59.637 "is_configured": true, 00:17:59.637 "data_offset": 2048, 00:17:59.637 "data_size": 63488 00:17:59.637 } 00:17:59.637 ] 00:17:59.637 }' 00:17:59.637 06:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:59.637 06:27:43 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:59.637 06:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:59.637 06:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:59.638 06:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:59.638 06:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:17:59.638 06:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:59.638 06:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:59.638 06:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:59.638 06:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:59.638 06:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:59.638 06:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:59.638 06:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.638 06:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:59.638 [2024-11-26 06:27:43.635946] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:59.638 [2024-11-26 06:27:43.636253] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:17:59.638 [2024-11-26 06:27:43.636318] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 
00:17:59.638 request: 00:17:59.638 { 00:17:59.638 "base_bdev": "BaseBdev1", 00:17:59.638 "raid_bdev": "raid_bdev1", 00:17:59.638 "method": "bdev_raid_add_base_bdev", 00:17:59.638 "req_id": 1 00:17:59.638 } 00:17:59.638 Got JSON-RPC error response 00:17:59.638 response: 00:17:59.638 { 00:17:59.638 "code": -22, 00:17:59.638 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:59.638 } 00:17:59.638 06:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:59.638 06:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:17:59.638 06:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:59.638 06:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:59.638 06:27:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:59.638 06:27:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:00.575 06:27:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:00.575 06:27:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:00.575 06:27:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:00.575 06:27:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:00.575 06:27:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:00.575 06:27:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:00.575 06:27:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:00.575 06:27:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:00.575 06:27:44 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:00.575 06:27:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:00.575 06:27:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.575 06:27:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.575 06:27:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.575 06:27:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:00.575 06:27:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.575 06:27:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:00.575 "name": "raid_bdev1", 00:18:00.575 "uuid": "c0936633-71cb-4a46-83ed-8d5d6c91d1d2", 00:18:00.575 "strip_size_kb": 0, 00:18:00.575 "state": "online", 00:18:00.575 "raid_level": "raid1", 00:18:00.575 "superblock": true, 00:18:00.575 "num_base_bdevs": 4, 00:18:00.575 "num_base_bdevs_discovered": 2, 00:18:00.575 "num_base_bdevs_operational": 2, 00:18:00.575 "base_bdevs_list": [ 00:18:00.575 { 00:18:00.575 "name": null, 00:18:00.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.575 "is_configured": false, 00:18:00.575 "data_offset": 0, 00:18:00.575 "data_size": 63488 00:18:00.575 }, 00:18:00.575 { 00:18:00.575 "name": null, 00:18:00.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.575 "is_configured": false, 00:18:00.575 "data_offset": 2048, 00:18:00.575 "data_size": 63488 00:18:00.575 }, 00:18:00.575 { 00:18:00.575 "name": "BaseBdev3", 00:18:00.575 "uuid": "ef395dd2-4789-59a6-8644-2d9a66e35a24", 00:18:00.575 "is_configured": true, 00:18:00.575 "data_offset": 2048, 00:18:00.575 "data_size": 63488 00:18:00.575 }, 00:18:00.575 { 00:18:00.575 "name": "BaseBdev4", 00:18:00.575 "uuid": 
"1d642367-38af-53c8-9098-e0d148bbc38a", 00:18:00.575 "is_configured": true, 00:18:00.575 "data_offset": 2048, 00:18:00.575 "data_size": 63488 00:18:00.575 } 00:18:00.575 ] 00:18:00.575 }' 00:18:00.575 06:27:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:00.575 06:27:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:01.142 06:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:01.142 06:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:01.142 06:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:01.142 06:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:01.142 06:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:01.142 06:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.142 06:27:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.142 06:27:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:01.142 06:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.142 06:27:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.142 06:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:01.142 "name": "raid_bdev1", 00:18:01.142 "uuid": "c0936633-71cb-4a46-83ed-8d5d6c91d1d2", 00:18:01.142 "strip_size_kb": 0, 00:18:01.142 "state": "online", 00:18:01.142 "raid_level": "raid1", 00:18:01.142 "superblock": true, 00:18:01.142 "num_base_bdevs": 4, 00:18:01.142 "num_base_bdevs_discovered": 2, 00:18:01.142 "num_base_bdevs_operational": 2, 00:18:01.142 
"base_bdevs_list": [ 00:18:01.142 { 00:18:01.142 "name": null, 00:18:01.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.142 "is_configured": false, 00:18:01.142 "data_offset": 0, 00:18:01.142 "data_size": 63488 00:18:01.142 }, 00:18:01.142 { 00:18:01.142 "name": null, 00:18:01.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.143 "is_configured": false, 00:18:01.143 "data_offset": 2048, 00:18:01.143 "data_size": 63488 00:18:01.143 }, 00:18:01.143 { 00:18:01.143 "name": "BaseBdev3", 00:18:01.143 "uuid": "ef395dd2-4789-59a6-8644-2d9a66e35a24", 00:18:01.143 "is_configured": true, 00:18:01.143 "data_offset": 2048, 00:18:01.143 "data_size": 63488 00:18:01.143 }, 00:18:01.143 { 00:18:01.143 "name": "BaseBdev4", 00:18:01.143 "uuid": "1d642367-38af-53c8-9098-e0d148bbc38a", 00:18:01.143 "is_configured": true, 00:18:01.143 "data_offset": 2048, 00:18:01.143 "data_size": 63488 00:18:01.143 } 00:18:01.143 ] 00:18:01.143 }' 00:18:01.143 06:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:01.143 06:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:01.143 06:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:01.401 06:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:01.401 06:27:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79718 00:18:01.401 06:27:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 79718 ']' 00:18:01.401 06:27:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 79718 00:18:01.401 06:27:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:18:01.401 06:27:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:01.401 06:27:45 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79718 00:18:01.401 06:27:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:01.401 06:27:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:01.401 06:27:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79718' 00:18:01.401 killing process with pid 79718 00:18:01.401 Received shutdown signal, test time was about 18.365765 seconds 00:18:01.401 00:18:01.401 Latency(us) 00:18:01.401 [2024-11-26T06:27:45.538Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:01.401 [2024-11-26T06:27:45.538Z] =================================================================================================================== 00:18:01.401 [2024-11-26T06:27:45.538Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:01.401 06:27:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 79718 00:18:01.401 [2024-11-26 06:27:45.360362] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:01.401 06:27:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 79718 00:18:01.401 [2024-11-26 06:27:45.360543] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:01.401 [2024-11-26 06:27:45.360638] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:01.401 [2024-11-26 06:27:45.360659] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:01.968 [2024-11-26 06:27:45.845340] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:03.347 06:27:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:18:03.347 00:18:03.347 real 0m22.096s 00:18:03.347 user 0m28.651s 00:18:03.347 sys 0m2.877s 00:18:03.347 
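The `killprocess 79718` sequence above follows a fixed pattern from `autotest_common.sh`: probe the pid with `kill -0`, check the process name via `ps -o comm=` (refusing to kill `sudo`), send the signal, then `wait` for the pid so the shutdown log is flushed. A rough Python sketch of the same probe/terminate/reap pattern, using a `sleep` child as a stand-in for the `bdev_svc` app (the real helper operates on an arbitrary pid, not a child process):

```python
import signal
import subprocess

# Stand-in for the SPDK app whose pid the test tracked.
proc = subprocess.Popen(["sleep", "60"])

# 'kill -0 $pid' equivalent: poll() is None while the process is alive.
assert proc.poll() is None

# 'kill $pid' followed by 'wait $pid': terminate, then reap the exit status
# so no zombie is left and shutdown output has been flushed.
proc.send_signal(signal.SIGTERM)
proc.wait()
```

After `wait()`, `proc.returncode` is the negated signal number for a signal death, which is how the caller can distinguish a clean shutdown from a crash.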
************************************ 00:18:03.347 END TEST raid_rebuild_test_sb_io 00:18:03.347 ************************************ 00:18:03.347 06:27:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:03.347 06:27:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:03.347 06:27:47 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:18:03.347 06:27:47 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:18:03.347 06:27:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:03.347 06:27:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:03.347 06:27:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:03.347 ************************************ 00:18:03.347 START TEST raid5f_state_function_test 00:18:03.347 ************************************ 00:18:03.347 06:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:18:03.347 06:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:18:03.347 06:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:18:03.347 06:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:18:03.347 06:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:03.347 06:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:03.347 06:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:03.347 06:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:03.347 06:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:03.347 06:27:47 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:03.347 06:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:03.347 06:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:03.347 06:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:03.347 06:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:18:03.347 06:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:03.347 06:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:03.347 06:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:03.347 06:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:03.347 06:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:03.347 06:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:03.347 06:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:03.347 06:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:03.347 06:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:18:03.347 06:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:18:03.347 06:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:18:03.347 06:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:18:03.347 06:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 
00:18:03.347 06:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80440 00:18:03.347 06:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:03.347 06:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80440' 00:18:03.347 Process raid pid: 80440 00:18:03.347 06:27:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80440 00:18:03.347 06:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 80440 ']' 00:18:03.347 06:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:03.347 06:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:03.347 06:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:03.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:03.347 06:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:03.347 06:27:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.347 [2024-11-26 06:27:47.377272] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:18:03.347 [2024-11-26 06:27:47.377531] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:03.606 [2024-11-26 06:27:47.561660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.606 [2024-11-26 06:27:47.718690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:03.865 [2024-11-26 06:27:47.980642] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:03.865 [2024-11-26 06:27:47.980799] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:04.433 06:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:04.433 06:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:18:04.433 06:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:18:04.433 06:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.433 06:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.433 [2024-11-26 06:27:48.268540] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:04.433 [2024-11-26 06:27:48.268676] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:04.433 [2024-11-26 06:27:48.268713] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:04.433 [2024-11-26 06:27:48.268742] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:04.433 [2024-11-26 06:27:48.268799] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:18:04.433 [2024-11-26 06:27:48.268843] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:04.433 06:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.433 06:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:04.433 06:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:04.433 06:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:04.433 06:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:04.433 06:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:04.433 06:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:04.433 06:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:04.433 06:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:04.433 06:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:04.433 06:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:04.433 06:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.433 06:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:04.433 06:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.433 06:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.433 06:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:18:04.433 06:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:04.433 "name": "Existed_Raid", 00:18:04.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.433 "strip_size_kb": 64, 00:18:04.433 "state": "configuring", 00:18:04.433 "raid_level": "raid5f", 00:18:04.433 "superblock": false, 00:18:04.433 "num_base_bdevs": 3, 00:18:04.433 "num_base_bdevs_discovered": 0, 00:18:04.433 "num_base_bdevs_operational": 3, 00:18:04.433 "base_bdevs_list": [ 00:18:04.433 { 00:18:04.433 "name": "BaseBdev1", 00:18:04.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.433 "is_configured": false, 00:18:04.433 "data_offset": 0, 00:18:04.433 "data_size": 0 00:18:04.433 }, 00:18:04.433 { 00:18:04.433 "name": "BaseBdev2", 00:18:04.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.433 "is_configured": false, 00:18:04.433 "data_offset": 0, 00:18:04.433 "data_size": 0 00:18:04.433 }, 00:18:04.433 { 00:18:04.433 "name": "BaseBdev3", 00:18:04.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.433 "is_configured": false, 00:18:04.433 "data_offset": 0, 00:18:04.433 "data_size": 0 00:18:04.433 } 00:18:04.433 ] 00:18:04.433 }' 00:18:04.433 06:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:04.433 06:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.693 06:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:04.693 06:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.693 06:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.693 [2024-11-26 06:27:48.755638] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:04.693 [2024-11-26 06:27:48.755684] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:18:04.693 06:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.693 06:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:18:04.693 06:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.693 06:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.693 [2024-11-26 06:27:48.763605] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:04.693 [2024-11-26 06:27:48.763659] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:04.693 [2024-11-26 06:27:48.763669] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:04.693 [2024-11-26 06:27:48.763679] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:04.693 [2024-11-26 06:27:48.763686] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:04.693 [2024-11-26 06:27:48.763696] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:04.693 06:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.693 06:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:04.693 06:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.693 06:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.693 [2024-11-26 06:27:48.817955] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:04.693 BaseBdev1 00:18:04.693 06:27:48 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.693 06:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:04.693 06:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:04.693 06:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:04.693 06:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:04.693 06:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:04.693 06:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:04.693 06:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:04.693 06:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.693 06:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.953 06:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.953 06:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:04.953 06:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.953 06:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.953 [ 00:18:04.953 { 00:18:04.953 "name": "BaseBdev1", 00:18:04.953 "aliases": [ 00:18:04.953 "8c64a3e6-f72b-43ab-96d5-8750afa0fbd2" 00:18:04.953 ], 00:18:04.953 "product_name": "Malloc disk", 00:18:04.953 "block_size": 512, 00:18:04.953 "num_blocks": 65536, 00:18:04.953 "uuid": "8c64a3e6-f72b-43ab-96d5-8750afa0fbd2", 00:18:04.953 "assigned_rate_limits": { 00:18:04.953 "rw_ios_per_sec": 0, 00:18:04.953 
"rw_mbytes_per_sec": 0, 00:18:04.953 "r_mbytes_per_sec": 0, 00:18:04.953 "w_mbytes_per_sec": 0 00:18:04.953 }, 00:18:04.953 "claimed": true, 00:18:04.953 "claim_type": "exclusive_write", 00:18:04.953 "zoned": false, 00:18:04.953 "supported_io_types": { 00:18:04.953 "read": true, 00:18:04.953 "write": true, 00:18:04.953 "unmap": true, 00:18:04.953 "flush": true, 00:18:04.953 "reset": true, 00:18:04.953 "nvme_admin": false, 00:18:04.953 "nvme_io": false, 00:18:04.953 "nvme_io_md": false, 00:18:04.953 "write_zeroes": true, 00:18:04.953 "zcopy": true, 00:18:04.953 "get_zone_info": false, 00:18:04.953 "zone_management": false, 00:18:04.953 "zone_append": false, 00:18:04.953 "compare": false, 00:18:04.953 "compare_and_write": false, 00:18:04.953 "abort": true, 00:18:04.953 "seek_hole": false, 00:18:04.953 "seek_data": false, 00:18:04.953 "copy": true, 00:18:04.953 "nvme_iov_md": false 00:18:04.953 }, 00:18:04.953 "memory_domains": [ 00:18:04.953 { 00:18:04.953 "dma_device_id": "system", 00:18:04.953 "dma_device_type": 1 00:18:04.953 }, 00:18:04.953 { 00:18:04.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:04.953 "dma_device_type": 2 00:18:04.953 } 00:18:04.953 ], 00:18:04.953 "driver_specific": {} 00:18:04.953 } 00:18:04.953 ] 00:18:04.953 06:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.953 06:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:04.953 06:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:04.953 06:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:04.953 06:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:04.953 06:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:04.953 06:27:48 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:04.953 06:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:04.953 06:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:04.953 06:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:04.953 06:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:04.953 06:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:04.953 06:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.953 06:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:04.953 06:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.953 06:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.953 06:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.953 06:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:04.953 "name": "Existed_Raid", 00:18:04.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.953 "strip_size_kb": 64, 00:18:04.953 "state": "configuring", 00:18:04.953 "raid_level": "raid5f", 00:18:04.953 "superblock": false, 00:18:04.953 "num_base_bdevs": 3, 00:18:04.953 "num_base_bdevs_discovered": 1, 00:18:04.953 "num_base_bdevs_operational": 3, 00:18:04.953 "base_bdevs_list": [ 00:18:04.953 { 00:18:04.953 "name": "BaseBdev1", 00:18:04.953 "uuid": "8c64a3e6-f72b-43ab-96d5-8750afa0fbd2", 00:18:04.953 "is_configured": true, 00:18:04.953 "data_offset": 0, 00:18:04.953 "data_size": 65536 00:18:04.953 }, 00:18:04.953 { 00:18:04.953 "name": 
"BaseBdev2", 00:18:04.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.953 "is_configured": false, 00:18:04.953 "data_offset": 0, 00:18:04.953 "data_size": 0 00:18:04.953 }, 00:18:04.953 { 00:18:04.953 "name": "BaseBdev3", 00:18:04.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.953 "is_configured": false, 00:18:04.953 "data_offset": 0, 00:18:04.953 "data_size": 0 00:18:04.953 } 00:18:04.953 ] 00:18:04.953 }' 00:18:04.953 06:27:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:04.953 06:27:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.535 06:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:05.535 06:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.535 06:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.535 [2024-11-26 06:27:49.361089] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:05.535 [2024-11-26 06:27:49.361155] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:05.535 06:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.535 06:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:18:05.535 06:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.535 06:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.535 [2024-11-26 06:27:49.373114] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:05.535 [2024-11-26 06:27:49.375367] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:18:05.535 [2024-11-26 06:27:49.375413] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:05.535 [2024-11-26 06:27:49.375424] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:05.535 [2024-11-26 06:27:49.375448] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:05.535 06:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.535 06:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:05.535 06:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:05.535 06:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:05.535 06:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:05.535 06:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:05.535 06:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:05.535 06:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:05.535 06:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:05.535 06:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:05.535 06:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:05.535 06:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:05.535 06:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:05.535 06:27:49 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.535 06:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.535 06:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.535 06:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:05.536 06:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.536 06:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:05.536 "name": "Existed_Raid", 00:18:05.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.536 "strip_size_kb": 64, 00:18:05.536 "state": "configuring", 00:18:05.536 "raid_level": "raid5f", 00:18:05.536 "superblock": false, 00:18:05.536 "num_base_bdevs": 3, 00:18:05.536 "num_base_bdevs_discovered": 1, 00:18:05.536 "num_base_bdevs_operational": 3, 00:18:05.536 "base_bdevs_list": [ 00:18:05.536 { 00:18:05.536 "name": "BaseBdev1", 00:18:05.536 "uuid": "8c64a3e6-f72b-43ab-96d5-8750afa0fbd2", 00:18:05.536 "is_configured": true, 00:18:05.536 "data_offset": 0, 00:18:05.536 "data_size": 65536 00:18:05.536 }, 00:18:05.536 { 00:18:05.536 "name": "BaseBdev2", 00:18:05.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.536 "is_configured": false, 00:18:05.536 "data_offset": 0, 00:18:05.536 "data_size": 0 00:18:05.536 }, 00:18:05.536 { 00:18:05.536 "name": "BaseBdev3", 00:18:05.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.536 "is_configured": false, 00:18:05.536 "data_offset": 0, 00:18:05.536 "data_size": 0 00:18:05.536 } 00:18:05.536 ] 00:18:05.536 }' 00:18:05.536 06:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:05.536 06:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.795 06:27:49 bdev_raid.raid5f_state_function_test -- 
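The `verify_raid_bdev_state Existed_Raid configuring raid5f 64 3` calls traced above amount to fetching the raid bdev's JSON and comparing a handful of fields against the expected values passed as arguments. A hedged sketch of that comparison in Python, against an abridged copy of the `Existed_Raid` dump from this log (the field names are the real RPC output keys; the `expected` dict simply restates the helper's arguments):

```python
import json

# Abridged Existed_Raid info from 'rpc_cmd bdev_raid_get_bdevs all' above,
# after BaseBdev1 was created and claimed (1 of 3 base bdevs discovered).
info = json.loads("""
{
  "name": "Existed_Raid",
  "strip_size_kb": 64,
  "state": "configuring",
  "raid_level": "raid5f",
  "superblock": false,
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 3
}
""")

# Expected values, restating 'verify_raid_bdev_state Existed_Raid
# configuring raid5f 64 3' as key/value checks.
expected = {
    "state": "configuring",
    "raid_level": "raid5f",
    "strip_size_kb": 64,
    "num_base_bdevs_operational": 3,
}
for key, want in expected.items():
    assert info[key] == want, f"{key}: got {info[key]!r}, want {want!r}"
```

The array remains in the `configuring` state until `num_base_bdevs_discovered` reaches `num_base_bdevs_operational`, which is exactly what the subsequent BaseBdev2/BaseBdev3 creations in this test drive toward.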
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:05.796 06:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.796 06:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.796 [2024-11-26 06:27:49.896422] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:05.796 BaseBdev2 00:18:05.796 06:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.796 06:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:05.796 06:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:05.796 06:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:05.796 06:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:05.796 06:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:05.796 06:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:05.796 06:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:05.796 06:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.796 06:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.796 06:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.796 06:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:05.796 06:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.796 06:27:49 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:05.796 [ 00:18:05.796 { 00:18:05.796 "name": "BaseBdev2", 00:18:06.056 "aliases": [ 00:18:06.056 "f8a97eae-c0df-46af-b2a1-04e7a2759c6d" 00:18:06.056 ], 00:18:06.056 "product_name": "Malloc disk", 00:18:06.056 "block_size": 512, 00:18:06.056 "num_blocks": 65536, 00:18:06.056 "uuid": "f8a97eae-c0df-46af-b2a1-04e7a2759c6d", 00:18:06.056 "assigned_rate_limits": { 00:18:06.056 "rw_ios_per_sec": 0, 00:18:06.056 "rw_mbytes_per_sec": 0, 00:18:06.056 "r_mbytes_per_sec": 0, 00:18:06.056 "w_mbytes_per_sec": 0 00:18:06.056 }, 00:18:06.056 "claimed": true, 00:18:06.056 "claim_type": "exclusive_write", 00:18:06.056 "zoned": false, 00:18:06.056 "supported_io_types": { 00:18:06.056 "read": true, 00:18:06.056 "write": true, 00:18:06.056 "unmap": true, 00:18:06.056 "flush": true, 00:18:06.056 "reset": true, 00:18:06.056 "nvme_admin": false, 00:18:06.056 "nvme_io": false, 00:18:06.056 "nvme_io_md": false, 00:18:06.056 "write_zeroes": true, 00:18:06.056 "zcopy": true, 00:18:06.056 "get_zone_info": false, 00:18:06.056 "zone_management": false, 00:18:06.056 "zone_append": false, 00:18:06.056 "compare": false, 00:18:06.056 "compare_and_write": false, 00:18:06.056 "abort": true, 00:18:06.056 "seek_hole": false, 00:18:06.056 "seek_data": false, 00:18:06.056 "copy": true, 00:18:06.056 "nvme_iov_md": false 00:18:06.056 }, 00:18:06.056 "memory_domains": [ 00:18:06.056 { 00:18:06.056 "dma_device_id": "system", 00:18:06.056 "dma_device_type": 1 00:18:06.056 }, 00:18:06.056 { 00:18:06.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:06.056 "dma_device_type": 2 00:18:06.056 } 00:18:06.056 ], 00:18:06.056 "driver_specific": {} 00:18:06.056 } 00:18:06.056 ] 00:18:06.056 06:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.056 06:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:06.056 06:27:49 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:06.056 06:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:06.056 06:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:06.056 06:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:06.056 06:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:06.056 06:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:06.056 06:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:06.056 06:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:06.056 06:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:06.056 06:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:06.056 06:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:06.056 06:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:06.056 06:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.056 06:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:06.056 06:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.056 06:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.056 06:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.056 06:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:18:06.056 "name": "Existed_Raid", 00:18:06.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.056 "strip_size_kb": 64, 00:18:06.056 "state": "configuring", 00:18:06.056 "raid_level": "raid5f", 00:18:06.056 "superblock": false, 00:18:06.056 "num_base_bdevs": 3, 00:18:06.056 "num_base_bdevs_discovered": 2, 00:18:06.056 "num_base_bdevs_operational": 3, 00:18:06.056 "base_bdevs_list": [ 00:18:06.056 { 00:18:06.056 "name": "BaseBdev1", 00:18:06.056 "uuid": "8c64a3e6-f72b-43ab-96d5-8750afa0fbd2", 00:18:06.056 "is_configured": true, 00:18:06.056 "data_offset": 0, 00:18:06.056 "data_size": 65536 00:18:06.056 }, 00:18:06.056 { 00:18:06.056 "name": "BaseBdev2", 00:18:06.056 "uuid": "f8a97eae-c0df-46af-b2a1-04e7a2759c6d", 00:18:06.056 "is_configured": true, 00:18:06.056 "data_offset": 0, 00:18:06.056 "data_size": 65536 00:18:06.056 }, 00:18:06.056 { 00:18:06.056 "name": "BaseBdev3", 00:18:06.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.056 "is_configured": false, 00:18:06.056 "data_offset": 0, 00:18:06.056 "data_size": 0 00:18:06.056 } 00:18:06.056 ] 00:18:06.056 }' 00:18:06.056 06:27:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:06.056 06:27:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.316 06:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:06.316 06:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.316 06:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.576 [2024-11-26 06:27:50.457986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:06.576 [2024-11-26 06:27:50.458208] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:06.576 [2024-11-26 06:27:50.458232] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:18:06.576 [2024-11-26 06:27:50.458609] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:06.576 [2024-11-26 06:27:50.464449] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:06.576 [2024-11-26 06:27:50.464502] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:06.576 [2024-11-26 06:27:50.464922] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:06.576 BaseBdev3 00:18:06.576 06:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.576 06:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:18:06.576 06:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:18:06.576 06:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:06.576 06:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:06.576 06:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:06.576 06:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:06.576 06:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:06.576 06:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.576 06:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.576 06:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.576 06:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:18:06.576 06:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.576 06:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.576 [ 00:18:06.576 { 00:18:06.576 "name": "BaseBdev3", 00:18:06.576 "aliases": [ 00:18:06.576 "eb109ac9-183c-4b1f-be8d-9c1a611047d1" 00:18:06.576 ], 00:18:06.576 "product_name": "Malloc disk", 00:18:06.576 "block_size": 512, 00:18:06.576 "num_blocks": 65536, 00:18:06.576 "uuid": "eb109ac9-183c-4b1f-be8d-9c1a611047d1", 00:18:06.576 "assigned_rate_limits": { 00:18:06.576 "rw_ios_per_sec": 0, 00:18:06.576 "rw_mbytes_per_sec": 0, 00:18:06.576 "r_mbytes_per_sec": 0, 00:18:06.576 "w_mbytes_per_sec": 0 00:18:06.576 }, 00:18:06.576 "claimed": true, 00:18:06.576 "claim_type": "exclusive_write", 00:18:06.576 "zoned": false, 00:18:06.576 "supported_io_types": { 00:18:06.576 "read": true, 00:18:06.576 "write": true, 00:18:06.576 "unmap": true, 00:18:06.576 "flush": true, 00:18:06.576 "reset": true, 00:18:06.576 "nvme_admin": false, 00:18:06.576 "nvme_io": false, 00:18:06.576 "nvme_io_md": false, 00:18:06.576 "write_zeroes": true, 00:18:06.576 "zcopy": true, 00:18:06.576 "get_zone_info": false, 00:18:06.576 "zone_management": false, 00:18:06.576 "zone_append": false, 00:18:06.576 "compare": false, 00:18:06.576 "compare_and_write": false, 00:18:06.576 "abort": true, 00:18:06.576 "seek_hole": false, 00:18:06.576 "seek_data": false, 00:18:06.576 "copy": true, 00:18:06.576 "nvme_iov_md": false 00:18:06.577 }, 00:18:06.577 "memory_domains": [ 00:18:06.577 { 00:18:06.577 "dma_device_id": "system", 00:18:06.577 "dma_device_type": 1 00:18:06.577 }, 00:18:06.577 { 00:18:06.577 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:06.577 "dma_device_type": 2 00:18:06.577 } 00:18:06.577 ], 00:18:06.577 "driver_specific": {} 00:18:06.577 } 00:18:06.577 ] 00:18:06.577 06:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:18:06.577 06:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:06.577 06:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:06.577 06:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:06.577 06:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:18:06.577 06:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:06.577 06:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:06.577 06:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:06.577 06:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:06.577 06:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:06.577 06:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:06.577 06:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:06.577 06:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:06.577 06:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:06.577 06:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.577 06:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:06.577 06:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.577 06:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.577 06:27:50 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.577 06:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:06.577 "name": "Existed_Raid", 00:18:06.577 "uuid": "f073b919-b20a-4880-8331-796fbd79f979", 00:18:06.577 "strip_size_kb": 64, 00:18:06.577 "state": "online", 00:18:06.577 "raid_level": "raid5f", 00:18:06.577 "superblock": false, 00:18:06.577 "num_base_bdevs": 3, 00:18:06.577 "num_base_bdevs_discovered": 3, 00:18:06.577 "num_base_bdevs_operational": 3, 00:18:06.577 "base_bdevs_list": [ 00:18:06.577 { 00:18:06.577 "name": "BaseBdev1", 00:18:06.577 "uuid": "8c64a3e6-f72b-43ab-96d5-8750afa0fbd2", 00:18:06.577 "is_configured": true, 00:18:06.577 "data_offset": 0, 00:18:06.577 "data_size": 65536 00:18:06.577 }, 00:18:06.577 { 00:18:06.577 "name": "BaseBdev2", 00:18:06.577 "uuid": "f8a97eae-c0df-46af-b2a1-04e7a2759c6d", 00:18:06.577 "is_configured": true, 00:18:06.577 "data_offset": 0, 00:18:06.577 "data_size": 65536 00:18:06.577 }, 00:18:06.577 { 00:18:06.577 "name": "BaseBdev3", 00:18:06.577 "uuid": "eb109ac9-183c-4b1f-be8d-9c1a611047d1", 00:18:06.577 "is_configured": true, 00:18:06.577 "data_offset": 0, 00:18:06.577 "data_size": 65536 00:18:06.577 } 00:18:06.577 ] 00:18:06.577 }' 00:18:06.577 06:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:06.577 06:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.148 06:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:07.148 06:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:07.148 06:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:07.148 06:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:07.148 06:27:50 
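The `verify_raid_bdev_state` helper exercised throughout this trace boils down to a few field comparisons on the `Existed_Raid` JSON dumped above. A minimal Python sketch of those checks (field names taken verbatim from the trace; the helper name mirrors the shell function, and the abridged JSON is an assumption limited to the fields the check actually reads):

```python
import json

# Abridged copy of the Existed_Raid info dumped in the trace above,
# limited to the fields the state check inspects.
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "strip_size_kb": 64,
  "state": "online",
  "raid_level": "raid5f",
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 3,
  "num_base_bdevs_operational": 3
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size, operational):
    """Sketch of bdev_raid.sh verify_raid_bdev_state: compare the dumped
    raid bdev JSON against the expected configuration."""
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == operational

# Matches the call in the trace: verify_raid_bdev_state Existed_Raid online raid5f 64 3
verify_raid_bdev_state(raid_bdev_info, "online", "raid5f", 64, 3)
```

Earlier in the trace the same check ran with `configuring` and `num_base_bdevs_discovered: 2`, i.e. before BaseBdev3 was claimed.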
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:07.148 06:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:07.148 06:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:07.148 06:27:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:07.148 06:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.148 06:27:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.148 [2024-11-26 06:27:50.995734] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:07.148 06:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.148 06:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:07.148 "name": "Existed_Raid", 00:18:07.148 "aliases": [ 00:18:07.148 "f073b919-b20a-4880-8331-796fbd79f979" 00:18:07.148 ], 00:18:07.149 "product_name": "Raid Volume", 00:18:07.149 "block_size": 512, 00:18:07.149 "num_blocks": 131072, 00:18:07.149 "uuid": "f073b919-b20a-4880-8331-796fbd79f979", 00:18:07.149 "assigned_rate_limits": { 00:18:07.149 "rw_ios_per_sec": 0, 00:18:07.149 "rw_mbytes_per_sec": 0, 00:18:07.149 "r_mbytes_per_sec": 0, 00:18:07.149 "w_mbytes_per_sec": 0 00:18:07.149 }, 00:18:07.149 "claimed": false, 00:18:07.149 "zoned": false, 00:18:07.149 "supported_io_types": { 00:18:07.149 "read": true, 00:18:07.149 "write": true, 00:18:07.149 "unmap": false, 00:18:07.149 "flush": false, 00:18:07.149 "reset": true, 00:18:07.149 "nvme_admin": false, 00:18:07.149 "nvme_io": false, 00:18:07.149 "nvme_io_md": false, 00:18:07.149 "write_zeroes": true, 00:18:07.149 "zcopy": false, 00:18:07.149 "get_zone_info": false, 00:18:07.149 "zone_management": false, 00:18:07.149 "zone_append": false, 
00:18:07.149 "compare": false, 00:18:07.149 "compare_and_write": false, 00:18:07.149 "abort": false, 00:18:07.149 "seek_hole": false, 00:18:07.149 "seek_data": false, 00:18:07.149 "copy": false, 00:18:07.149 "nvme_iov_md": false 00:18:07.149 }, 00:18:07.149 "driver_specific": { 00:18:07.149 "raid": { 00:18:07.149 "uuid": "f073b919-b20a-4880-8331-796fbd79f979", 00:18:07.149 "strip_size_kb": 64, 00:18:07.149 "state": "online", 00:18:07.149 "raid_level": "raid5f", 00:18:07.149 "superblock": false, 00:18:07.149 "num_base_bdevs": 3, 00:18:07.149 "num_base_bdevs_discovered": 3, 00:18:07.149 "num_base_bdevs_operational": 3, 00:18:07.149 "base_bdevs_list": [ 00:18:07.149 { 00:18:07.149 "name": "BaseBdev1", 00:18:07.149 "uuid": "8c64a3e6-f72b-43ab-96d5-8750afa0fbd2", 00:18:07.149 "is_configured": true, 00:18:07.149 "data_offset": 0, 00:18:07.149 "data_size": 65536 00:18:07.149 }, 00:18:07.149 { 00:18:07.149 "name": "BaseBdev2", 00:18:07.149 "uuid": "f8a97eae-c0df-46af-b2a1-04e7a2759c6d", 00:18:07.149 "is_configured": true, 00:18:07.149 "data_offset": 0, 00:18:07.149 "data_size": 65536 00:18:07.149 }, 00:18:07.149 { 00:18:07.149 "name": "BaseBdev3", 00:18:07.149 "uuid": "eb109ac9-183c-4b1f-be8d-9c1a611047d1", 00:18:07.149 "is_configured": true, 00:18:07.149 "data_offset": 0, 00:18:07.149 "data_size": 65536 00:18:07.149 } 00:18:07.149 ] 00:18:07.149 } 00:18:07.149 } 00:18:07.149 }' 00:18:07.149 06:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:07.149 06:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:07.149 BaseBdev2 00:18:07.149 BaseBdev3' 00:18:07.149 06:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:07.149 06:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:18:07.149 06:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:07.149 06:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:07.149 06:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.149 06:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.149 06:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:07.149 06:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.149 06:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:07.149 06:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:07.149 06:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:07.149 06:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:07.149 06:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:07.149 06:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.149 06:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.149 06:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.149 06:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:07.149 06:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:07.149 06:27:51 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:07.149 06:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:07.149 06:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:07.149 06:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.149 06:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.149 06:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.149 06:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:07.149 06:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:07.149 06:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:07.149 06:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.149 06:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.149 [2024-11-26 06:27:51.271082] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:07.409 06:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.409 06:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:07.409 06:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:18:07.409 06:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:07.409 06:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:18:07.409 06:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:07.409 
06:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:18:07.409 06:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:07.409 06:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:07.409 06:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:07.409 06:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:07.409 06:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:07.409 06:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:07.409 06:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:07.409 06:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:07.409 06:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:07.409 06:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.409 06:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:07.409 06:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.409 06:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.409 06:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.409 06:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:07.409 "name": "Existed_Raid", 00:18:07.409 "uuid": "f073b919-b20a-4880-8331-796fbd79f979", 00:18:07.409 "strip_size_kb": 64, 00:18:07.409 "state": 
"online", 00:18:07.409 "raid_level": "raid5f", 00:18:07.409 "superblock": false, 00:18:07.409 "num_base_bdevs": 3, 00:18:07.409 "num_base_bdevs_discovered": 2, 00:18:07.409 "num_base_bdevs_operational": 2, 00:18:07.409 "base_bdevs_list": [ 00:18:07.409 { 00:18:07.409 "name": null, 00:18:07.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.409 "is_configured": false, 00:18:07.409 "data_offset": 0, 00:18:07.409 "data_size": 65536 00:18:07.409 }, 00:18:07.409 { 00:18:07.409 "name": "BaseBdev2", 00:18:07.409 "uuid": "f8a97eae-c0df-46af-b2a1-04e7a2759c6d", 00:18:07.409 "is_configured": true, 00:18:07.409 "data_offset": 0, 00:18:07.409 "data_size": 65536 00:18:07.409 }, 00:18:07.409 { 00:18:07.409 "name": "BaseBdev3", 00:18:07.409 "uuid": "eb109ac9-183c-4b1f-be8d-9c1a611047d1", 00:18:07.409 "is_configured": true, 00:18:07.409 "data_offset": 0, 00:18:07.409 "data_size": 65536 00:18:07.409 } 00:18:07.409 ] 00:18:07.409 }' 00:18:07.409 06:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:07.409 06:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.977 06:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:07.977 06:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:07.977 06:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.977 06:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:07.977 06:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.977 06:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.977 06:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.977 06:27:51 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:07.977 06:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:07.977 06:27:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:07.977 06:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.977 06:27:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.977 [2024-11-26 06:27:51.897504] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:07.977 [2024-11-26 06:27:51.897644] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:07.977 [2024-11-26 06:27:52.006453] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:07.977 06:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.977 06:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:07.977 06:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:07.977 06:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.977 06:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.977 06:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:07.977 06:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.977 06:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.977 06:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:07.977 06:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:18:07.977 06:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:18:07.977 06:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.977 06:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.977 [2024-11-26 06:27:52.070347] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:07.977 [2024-11-26 06:27:52.070409] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:08.236 06:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.236 06:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:08.236 06:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:08.236 06:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.236 06:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.236 06:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:08.236 06:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.236 06:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.236 06:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:08.236 06:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:08.236 06:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:18:08.236 06:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:18:08.236 06:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:18:08.236 06:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:08.236 06:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.236 06:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.236 BaseBdev2 00:18:08.236 06:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.236 06:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:18:08.236 06:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:08.236 06:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:08.236 06:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:08.236 06:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:08.236 06:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:08.236 06:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:08.236 06:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.236 06:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.236 06:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.236 06:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:08.236 06:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.236 06:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:18:08.236 [ 00:18:08.236 { 00:18:08.236 "name": "BaseBdev2", 00:18:08.236 "aliases": [ 00:18:08.236 "7994e28e-4b79-4e3e-ad36-896ba25fc5ce" 00:18:08.236 ], 00:18:08.236 "product_name": "Malloc disk", 00:18:08.236 "block_size": 512, 00:18:08.236 "num_blocks": 65536, 00:18:08.236 "uuid": "7994e28e-4b79-4e3e-ad36-896ba25fc5ce", 00:18:08.236 "assigned_rate_limits": { 00:18:08.236 "rw_ios_per_sec": 0, 00:18:08.236 "rw_mbytes_per_sec": 0, 00:18:08.236 "r_mbytes_per_sec": 0, 00:18:08.236 "w_mbytes_per_sec": 0 00:18:08.236 }, 00:18:08.236 "claimed": false, 00:18:08.236 "zoned": false, 00:18:08.236 "supported_io_types": { 00:18:08.236 "read": true, 00:18:08.236 "write": true, 00:18:08.236 "unmap": true, 00:18:08.236 "flush": true, 00:18:08.236 "reset": true, 00:18:08.236 "nvme_admin": false, 00:18:08.236 "nvme_io": false, 00:18:08.236 "nvme_io_md": false, 00:18:08.236 "write_zeroes": true, 00:18:08.236 "zcopy": true, 00:18:08.236 "get_zone_info": false, 00:18:08.236 "zone_management": false, 00:18:08.236 "zone_append": false, 00:18:08.236 "compare": false, 00:18:08.236 "compare_and_write": false, 00:18:08.236 "abort": true, 00:18:08.236 "seek_hole": false, 00:18:08.236 "seek_data": false, 00:18:08.236 "copy": true, 00:18:08.236 "nvme_iov_md": false 00:18:08.236 }, 00:18:08.236 "memory_domains": [ 00:18:08.236 { 00:18:08.236 "dma_device_id": "system", 00:18:08.236 "dma_device_type": 1 00:18:08.236 }, 00:18:08.236 { 00:18:08.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:08.236 "dma_device_type": 2 00:18:08.236 } 00:18:08.236 ], 00:18:08.236 "driver_specific": {} 00:18:08.236 } 00:18:08.236 ] 00:18:08.236 06:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.236 06:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:08.236 06:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:08.236 06:27:52 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:08.236 06:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:08.236 06:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.236 06:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.236 BaseBdev3 00:18:08.236 06:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.236 06:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:18:08.236 06:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:18:08.236 06:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:08.236 06:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:08.236 06:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:08.236 06:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:08.236 06:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:08.236 06:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.236 06:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.495 06:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.495 06:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:08.495 06:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.495 06:27:52 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:08.495 [ 00:18:08.495 { 00:18:08.495 "name": "BaseBdev3", 00:18:08.495 "aliases": [ 00:18:08.495 "3176ad83-e9f8-4429-b145-d08aa2f6b0da" 00:18:08.495 ], 00:18:08.495 "product_name": "Malloc disk", 00:18:08.495 "block_size": 512, 00:18:08.495 "num_blocks": 65536, 00:18:08.495 "uuid": "3176ad83-e9f8-4429-b145-d08aa2f6b0da", 00:18:08.495 "assigned_rate_limits": { 00:18:08.495 "rw_ios_per_sec": 0, 00:18:08.495 "rw_mbytes_per_sec": 0, 00:18:08.495 "r_mbytes_per_sec": 0, 00:18:08.495 "w_mbytes_per_sec": 0 00:18:08.495 }, 00:18:08.495 "claimed": false, 00:18:08.495 "zoned": false, 00:18:08.495 "supported_io_types": { 00:18:08.495 "read": true, 00:18:08.495 "write": true, 00:18:08.495 "unmap": true, 00:18:08.495 "flush": true, 00:18:08.495 "reset": true, 00:18:08.495 "nvme_admin": false, 00:18:08.495 "nvme_io": false, 00:18:08.495 "nvme_io_md": false, 00:18:08.495 "write_zeroes": true, 00:18:08.495 "zcopy": true, 00:18:08.495 "get_zone_info": false, 00:18:08.495 "zone_management": false, 00:18:08.495 "zone_append": false, 00:18:08.495 "compare": false, 00:18:08.495 "compare_and_write": false, 00:18:08.495 "abort": true, 00:18:08.495 "seek_hole": false, 00:18:08.495 "seek_data": false, 00:18:08.495 "copy": true, 00:18:08.495 "nvme_iov_md": false 00:18:08.495 }, 00:18:08.495 "memory_domains": [ 00:18:08.495 { 00:18:08.495 "dma_device_id": "system", 00:18:08.495 "dma_device_type": 1 00:18:08.495 }, 00:18:08.495 { 00:18:08.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:08.495 "dma_device_type": 2 00:18:08.495 } 00:18:08.495 ], 00:18:08.495 "driver_specific": {} 00:18:08.495 } 00:18:08.495 ] 00:18:08.495 06:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.495 06:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:08.495 06:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:08.495 06:27:52 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:08.495 06:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:18:08.495 06:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.495 06:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.495 [2024-11-26 06:27:52.404885] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:08.495 [2024-11-26 06:27:52.404939] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:08.495 [2024-11-26 06:27:52.404963] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:08.495 [2024-11-26 06:27:52.407186] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:08.495 06:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.495 06:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:08.495 06:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:08.495 06:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:08.495 06:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:08.495 06:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:08.495 06:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:08.495 06:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:08.495 06:27:52 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:08.495 06:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:08.495 06:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:08.495 06:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.495 06:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:08.495 06:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.495 06:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.495 06:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.495 06:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:08.495 "name": "Existed_Raid", 00:18:08.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.495 "strip_size_kb": 64, 00:18:08.495 "state": "configuring", 00:18:08.495 "raid_level": "raid5f", 00:18:08.495 "superblock": false, 00:18:08.495 "num_base_bdevs": 3, 00:18:08.495 "num_base_bdevs_discovered": 2, 00:18:08.495 "num_base_bdevs_operational": 3, 00:18:08.495 "base_bdevs_list": [ 00:18:08.495 { 00:18:08.495 "name": "BaseBdev1", 00:18:08.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.495 "is_configured": false, 00:18:08.495 "data_offset": 0, 00:18:08.495 "data_size": 0 00:18:08.495 }, 00:18:08.495 { 00:18:08.495 "name": "BaseBdev2", 00:18:08.495 "uuid": "7994e28e-4b79-4e3e-ad36-896ba25fc5ce", 00:18:08.495 "is_configured": true, 00:18:08.495 "data_offset": 0, 00:18:08.495 "data_size": 65536 00:18:08.495 }, 00:18:08.495 { 00:18:08.495 "name": "BaseBdev3", 00:18:08.495 "uuid": "3176ad83-e9f8-4429-b145-d08aa2f6b0da", 00:18:08.495 "is_configured": true, 
00:18:08.495 "data_offset": 0, 00:18:08.495 "data_size": 65536 00:18:08.495 } 00:18:08.495 ] 00:18:08.495 }' 00:18:08.495 06:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:08.495 06:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.753 06:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:18:08.753 06:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.753 06:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.753 [2024-11-26 06:27:52.876129] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:08.753 06:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.753 06:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:08.753 06:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:08.753 06:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:08.753 06:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:08.753 06:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:08.753 06:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:08.753 06:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:08.753 06:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:08.753 06:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:08.753 06:27:52 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:08.753 06:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.753 06:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.753 06:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.012 06:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:09.012 06:27:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.012 06:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:09.012 "name": "Existed_Raid", 00:18:09.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.012 "strip_size_kb": 64, 00:18:09.012 "state": "configuring", 00:18:09.012 "raid_level": "raid5f", 00:18:09.012 "superblock": false, 00:18:09.012 "num_base_bdevs": 3, 00:18:09.012 "num_base_bdevs_discovered": 1, 00:18:09.012 "num_base_bdevs_operational": 3, 00:18:09.012 "base_bdevs_list": [ 00:18:09.012 { 00:18:09.012 "name": "BaseBdev1", 00:18:09.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.012 "is_configured": false, 00:18:09.012 "data_offset": 0, 00:18:09.012 "data_size": 0 00:18:09.012 }, 00:18:09.012 { 00:18:09.012 "name": null, 00:18:09.012 "uuid": "7994e28e-4b79-4e3e-ad36-896ba25fc5ce", 00:18:09.012 "is_configured": false, 00:18:09.012 "data_offset": 0, 00:18:09.012 "data_size": 65536 00:18:09.012 }, 00:18:09.012 { 00:18:09.012 "name": "BaseBdev3", 00:18:09.012 "uuid": "3176ad83-e9f8-4429-b145-d08aa2f6b0da", 00:18:09.012 "is_configured": true, 00:18:09.012 "data_offset": 0, 00:18:09.012 "data_size": 65536 00:18:09.012 } 00:18:09.012 ] 00:18:09.012 }' 00:18:09.012 06:27:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:09.012 06:27:52 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.291 06:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.291 06:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:09.291 06:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.291 06:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.291 06:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.291 06:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:18:09.291 06:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:09.291 06:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.291 06:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.291 [2024-11-26 06:27:53.397806] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:09.291 BaseBdev1 00:18:09.291 06:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.291 06:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:18:09.291 06:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:09.291 06:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:09.291 06:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:09.291 06:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:09.291 06:27:53 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:09.291 06:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:09.291 06:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.291 06:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.566 06:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.566 06:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:09.566 06:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.566 06:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.566 [ 00:18:09.566 { 00:18:09.566 "name": "BaseBdev1", 00:18:09.566 "aliases": [ 00:18:09.566 "2d612200-3d21-4df7-9b44-9d4921139940" 00:18:09.566 ], 00:18:09.566 "product_name": "Malloc disk", 00:18:09.566 "block_size": 512, 00:18:09.566 "num_blocks": 65536, 00:18:09.567 "uuid": "2d612200-3d21-4df7-9b44-9d4921139940", 00:18:09.567 "assigned_rate_limits": { 00:18:09.567 "rw_ios_per_sec": 0, 00:18:09.567 "rw_mbytes_per_sec": 0, 00:18:09.567 "r_mbytes_per_sec": 0, 00:18:09.567 "w_mbytes_per_sec": 0 00:18:09.567 }, 00:18:09.567 "claimed": true, 00:18:09.567 "claim_type": "exclusive_write", 00:18:09.567 "zoned": false, 00:18:09.567 "supported_io_types": { 00:18:09.567 "read": true, 00:18:09.567 "write": true, 00:18:09.567 "unmap": true, 00:18:09.567 "flush": true, 00:18:09.567 "reset": true, 00:18:09.567 "nvme_admin": false, 00:18:09.567 "nvme_io": false, 00:18:09.567 "nvme_io_md": false, 00:18:09.567 "write_zeroes": true, 00:18:09.567 "zcopy": true, 00:18:09.567 "get_zone_info": false, 00:18:09.567 "zone_management": false, 00:18:09.567 "zone_append": false, 00:18:09.567 
"compare": false, 00:18:09.567 "compare_and_write": false, 00:18:09.567 "abort": true, 00:18:09.567 "seek_hole": false, 00:18:09.567 "seek_data": false, 00:18:09.567 "copy": true, 00:18:09.567 "nvme_iov_md": false 00:18:09.567 }, 00:18:09.567 "memory_domains": [ 00:18:09.567 { 00:18:09.567 "dma_device_id": "system", 00:18:09.567 "dma_device_type": 1 00:18:09.567 }, 00:18:09.567 { 00:18:09.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:09.567 "dma_device_type": 2 00:18:09.567 } 00:18:09.567 ], 00:18:09.567 "driver_specific": {} 00:18:09.567 } 00:18:09.567 ] 00:18:09.567 06:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.567 06:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:09.567 06:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:09.567 06:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:09.567 06:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:09.567 06:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:09.567 06:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:09.567 06:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:09.567 06:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:09.567 06:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:09.567 06:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:09.567 06:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:09.567 06:27:53 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.567 06:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.567 06:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.567 06:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:09.567 06:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.567 06:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:09.567 "name": "Existed_Raid", 00:18:09.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.567 "strip_size_kb": 64, 00:18:09.567 "state": "configuring", 00:18:09.567 "raid_level": "raid5f", 00:18:09.567 "superblock": false, 00:18:09.567 "num_base_bdevs": 3, 00:18:09.567 "num_base_bdevs_discovered": 2, 00:18:09.567 "num_base_bdevs_operational": 3, 00:18:09.567 "base_bdevs_list": [ 00:18:09.567 { 00:18:09.567 "name": "BaseBdev1", 00:18:09.567 "uuid": "2d612200-3d21-4df7-9b44-9d4921139940", 00:18:09.567 "is_configured": true, 00:18:09.567 "data_offset": 0, 00:18:09.567 "data_size": 65536 00:18:09.567 }, 00:18:09.567 { 00:18:09.567 "name": null, 00:18:09.567 "uuid": "7994e28e-4b79-4e3e-ad36-896ba25fc5ce", 00:18:09.567 "is_configured": false, 00:18:09.567 "data_offset": 0, 00:18:09.567 "data_size": 65536 00:18:09.567 }, 00:18:09.567 { 00:18:09.567 "name": "BaseBdev3", 00:18:09.567 "uuid": "3176ad83-e9f8-4429-b145-d08aa2f6b0da", 00:18:09.567 "is_configured": true, 00:18:09.567 "data_offset": 0, 00:18:09.567 "data_size": 65536 00:18:09.567 } 00:18:09.567 ] 00:18:09.567 }' 00:18:09.567 06:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:09.567 06:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.825 06:27:53 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.825 06:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:09.825 06:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.825 06:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.825 06:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.825 06:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:18:09.825 06:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:18:09.825 06:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.825 06:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.825 [2024-11-26 06:27:53.921014] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:09.825 06:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.825 06:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:09.825 06:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:09.825 06:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:09.825 06:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:09.825 06:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:09.825 06:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:09.825 06:27:53 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:09.825 06:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:09.825 06:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:09.825 06:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:09.825 06:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.826 06:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.826 06:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.826 06:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:09.826 06:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.084 06:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:10.084 "name": "Existed_Raid", 00:18:10.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.084 "strip_size_kb": 64, 00:18:10.084 "state": "configuring", 00:18:10.084 "raid_level": "raid5f", 00:18:10.084 "superblock": false, 00:18:10.084 "num_base_bdevs": 3, 00:18:10.084 "num_base_bdevs_discovered": 1, 00:18:10.084 "num_base_bdevs_operational": 3, 00:18:10.084 "base_bdevs_list": [ 00:18:10.084 { 00:18:10.084 "name": "BaseBdev1", 00:18:10.084 "uuid": "2d612200-3d21-4df7-9b44-9d4921139940", 00:18:10.084 "is_configured": true, 00:18:10.084 "data_offset": 0, 00:18:10.084 "data_size": 65536 00:18:10.084 }, 00:18:10.084 { 00:18:10.085 "name": null, 00:18:10.085 "uuid": "7994e28e-4b79-4e3e-ad36-896ba25fc5ce", 00:18:10.085 "is_configured": false, 00:18:10.085 "data_offset": 0, 00:18:10.085 "data_size": 65536 00:18:10.085 }, 00:18:10.085 { 00:18:10.085 "name": null, 
00:18:10.085 "uuid": "3176ad83-e9f8-4429-b145-d08aa2f6b0da", 00:18:10.085 "is_configured": false, 00:18:10.085 "data_offset": 0, 00:18:10.085 "data_size": 65536 00:18:10.085 } 00:18:10.085 ] 00:18:10.085 }' 00:18:10.085 06:27:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:10.085 06:27:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.344 06:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:10.344 06:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.344 06:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.344 06:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.344 06:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.344 06:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:18:10.344 06:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:10.344 06:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.344 06:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.344 [2024-11-26 06:27:54.456153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:10.344 06:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.344 06:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:10.344 06:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:10.344 06:27:54 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:10.344 06:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:10.344 06:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:10.344 06:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:10.344 06:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:10.344 06:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:10.344 06:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:10.344 06:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:10.344 06:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:10.344 06:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.344 06:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.344 06:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.603 06:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.603 06:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:10.603 "name": "Existed_Raid", 00:18:10.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.603 "strip_size_kb": 64, 00:18:10.603 "state": "configuring", 00:18:10.603 "raid_level": "raid5f", 00:18:10.603 "superblock": false, 00:18:10.603 "num_base_bdevs": 3, 00:18:10.603 "num_base_bdevs_discovered": 2, 00:18:10.603 "num_base_bdevs_operational": 3, 00:18:10.603 "base_bdevs_list": [ 00:18:10.603 { 
00:18:10.603 "name": "BaseBdev1", 00:18:10.603 "uuid": "2d612200-3d21-4df7-9b44-9d4921139940", 00:18:10.603 "is_configured": true, 00:18:10.603 "data_offset": 0, 00:18:10.603 "data_size": 65536 00:18:10.603 }, 00:18:10.603 { 00:18:10.603 "name": null, 00:18:10.603 "uuid": "7994e28e-4b79-4e3e-ad36-896ba25fc5ce", 00:18:10.603 "is_configured": false, 00:18:10.603 "data_offset": 0, 00:18:10.603 "data_size": 65536 00:18:10.603 }, 00:18:10.603 { 00:18:10.603 "name": "BaseBdev3", 00:18:10.603 "uuid": "3176ad83-e9f8-4429-b145-d08aa2f6b0da", 00:18:10.603 "is_configured": true, 00:18:10.603 "data_offset": 0, 00:18:10.603 "data_size": 65536 00:18:10.603 } 00:18:10.603 ] 00:18:10.603 }' 00:18:10.603 06:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:10.603 06:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.861 06:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:10.861 06:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.861 06:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.861 06:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.861 06:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.861 06:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:18:10.861 06:27:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:10.861 06:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.861 06:27:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.861 [2024-11-26 06:27:54.939284] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:11.121 06:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.121 06:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:11.121 06:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:11.121 06:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:11.121 06:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:11.121 06:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:11.121 06:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:11.121 06:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:11.121 06:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:11.121 06:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:11.121 06:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:11.121 06:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.121 06:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:11.121 06:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.121 06:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.121 06:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.121 06:27:55 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:11.121 "name": "Existed_Raid", 00:18:11.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.121 "strip_size_kb": 64, 00:18:11.121 "state": "configuring", 00:18:11.121 "raid_level": "raid5f", 00:18:11.121 "superblock": false, 00:18:11.121 "num_base_bdevs": 3, 00:18:11.121 "num_base_bdevs_discovered": 1, 00:18:11.121 "num_base_bdevs_operational": 3, 00:18:11.121 "base_bdevs_list": [ 00:18:11.121 { 00:18:11.121 "name": null, 00:18:11.121 "uuid": "2d612200-3d21-4df7-9b44-9d4921139940", 00:18:11.121 "is_configured": false, 00:18:11.121 "data_offset": 0, 00:18:11.121 "data_size": 65536 00:18:11.121 }, 00:18:11.121 { 00:18:11.121 "name": null, 00:18:11.121 "uuid": "7994e28e-4b79-4e3e-ad36-896ba25fc5ce", 00:18:11.121 "is_configured": false, 00:18:11.121 "data_offset": 0, 00:18:11.121 "data_size": 65536 00:18:11.121 }, 00:18:11.121 { 00:18:11.121 "name": "BaseBdev3", 00:18:11.121 "uuid": "3176ad83-e9f8-4429-b145-d08aa2f6b0da", 00:18:11.121 "is_configured": true, 00:18:11.121 "data_offset": 0, 00:18:11.121 "data_size": 65536 00:18:11.121 } 00:18:11.121 ] 00:18:11.121 }' 00:18:11.121 06:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:11.121 06:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.381 06:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.381 06:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.381 06:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:11.381 06:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.381 06:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.381 06:27:55 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:18:11.381 06:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:11.381 06:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.381 06:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.381 [2024-11-26 06:27:55.487336] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:11.381 06:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.381 06:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:11.381 06:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:11.381 06:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:11.381 06:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:11.381 06:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:11.381 06:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:11.381 06:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:11.381 06:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:11.381 06:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:11.381 06:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:11.381 06:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:11.381 06:27:55 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.381 06:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.381 06:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.640 06:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.640 06:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:11.640 "name": "Existed_Raid", 00:18:11.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.640 "strip_size_kb": 64, 00:18:11.641 "state": "configuring", 00:18:11.641 "raid_level": "raid5f", 00:18:11.641 "superblock": false, 00:18:11.641 "num_base_bdevs": 3, 00:18:11.641 "num_base_bdevs_discovered": 2, 00:18:11.641 "num_base_bdevs_operational": 3, 00:18:11.641 "base_bdevs_list": [ 00:18:11.641 { 00:18:11.641 "name": null, 00:18:11.641 "uuid": "2d612200-3d21-4df7-9b44-9d4921139940", 00:18:11.641 "is_configured": false, 00:18:11.641 "data_offset": 0, 00:18:11.641 "data_size": 65536 00:18:11.641 }, 00:18:11.641 { 00:18:11.641 "name": "BaseBdev2", 00:18:11.641 "uuid": "7994e28e-4b79-4e3e-ad36-896ba25fc5ce", 00:18:11.641 "is_configured": true, 00:18:11.641 "data_offset": 0, 00:18:11.641 "data_size": 65536 00:18:11.641 }, 00:18:11.641 { 00:18:11.641 "name": "BaseBdev3", 00:18:11.641 "uuid": "3176ad83-e9f8-4429-b145-d08aa2f6b0da", 00:18:11.641 "is_configured": true, 00:18:11.641 "data_offset": 0, 00:18:11.641 "data_size": 65536 00:18:11.641 } 00:18:11.641 ] 00:18:11.641 }' 00:18:11.641 06:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:11.641 06:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.900 06:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.900 06:27:55 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.900 06:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.900 06:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:11.900 06:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.900 06:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:18:11.900 06:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.900 06:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.900 06:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.900 06:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:11.900 06:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.900 06:27:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 2d612200-3d21-4df7-9b44-9d4921139940 00:18:11.900 06:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.900 06:27:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.900 [2024-11-26 06:27:56.030307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:11.900 [2024-11-26 06:27:56.030373] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:11.900 [2024-11-26 06:27:56.030385] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:18:11.900 [2024-11-26 06:27:56.030676] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:18:12.160 [2024-11-26 06:27:56.036540] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:12.160 [2024-11-26 06:27:56.036566] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:18:12.160 [2024-11-26 06:27:56.036865] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:12.160 NewBaseBdev 00:18:12.160 06:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.160 06:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:18:12.160 06:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:18:12.160 06:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:12.160 06:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:12.160 06:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:12.160 06:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:12.160 06:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:12.160 06:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.160 06:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.160 06:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.160 06:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:12.160 06:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.160 06:27:56 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.160 [ 00:18:12.160 { 00:18:12.160 "name": "NewBaseBdev", 00:18:12.160 "aliases": [ 00:18:12.160 "2d612200-3d21-4df7-9b44-9d4921139940" 00:18:12.160 ], 00:18:12.160 "product_name": "Malloc disk", 00:18:12.160 "block_size": 512, 00:18:12.160 "num_blocks": 65536, 00:18:12.160 "uuid": "2d612200-3d21-4df7-9b44-9d4921139940", 00:18:12.160 "assigned_rate_limits": { 00:18:12.160 "rw_ios_per_sec": 0, 00:18:12.160 "rw_mbytes_per_sec": 0, 00:18:12.160 "r_mbytes_per_sec": 0, 00:18:12.160 "w_mbytes_per_sec": 0 00:18:12.160 }, 00:18:12.160 "claimed": true, 00:18:12.160 "claim_type": "exclusive_write", 00:18:12.160 "zoned": false, 00:18:12.160 "supported_io_types": { 00:18:12.160 "read": true, 00:18:12.160 "write": true, 00:18:12.160 "unmap": true, 00:18:12.160 "flush": true, 00:18:12.160 "reset": true, 00:18:12.160 "nvme_admin": false, 00:18:12.160 "nvme_io": false, 00:18:12.160 "nvme_io_md": false, 00:18:12.160 "write_zeroes": true, 00:18:12.160 "zcopy": true, 00:18:12.160 "get_zone_info": false, 00:18:12.160 "zone_management": false, 00:18:12.160 "zone_append": false, 00:18:12.160 "compare": false, 00:18:12.160 "compare_and_write": false, 00:18:12.160 "abort": true, 00:18:12.160 "seek_hole": false, 00:18:12.160 "seek_data": false, 00:18:12.160 "copy": true, 00:18:12.160 "nvme_iov_md": false 00:18:12.160 }, 00:18:12.160 "memory_domains": [ 00:18:12.160 { 00:18:12.160 "dma_device_id": "system", 00:18:12.160 "dma_device_type": 1 00:18:12.160 }, 00:18:12.160 { 00:18:12.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:12.160 "dma_device_type": 2 00:18:12.160 } 00:18:12.160 ], 00:18:12.160 "driver_specific": {} 00:18:12.160 } 00:18:12.160 ] 00:18:12.160 06:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.160 06:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:12.160 06:27:56 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:18:12.160 06:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:12.160 06:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:12.160 06:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:12.160 06:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:12.160 06:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:12.160 06:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:12.160 06:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:12.160 06:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:12.160 06:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:12.160 06:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.160 06:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:12.161 06:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.161 06:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.161 06:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.161 06:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:12.161 "name": "Existed_Raid", 00:18:12.161 "uuid": "b1e754fe-6c9a-4152-adf2-04b2b83bb6dc", 00:18:12.161 "strip_size_kb": 64, 00:18:12.161 "state": "online", 
00:18:12.161 "raid_level": "raid5f", 00:18:12.161 "superblock": false, 00:18:12.161 "num_base_bdevs": 3, 00:18:12.161 "num_base_bdevs_discovered": 3, 00:18:12.161 "num_base_bdevs_operational": 3, 00:18:12.161 "base_bdevs_list": [ 00:18:12.161 { 00:18:12.161 "name": "NewBaseBdev", 00:18:12.161 "uuid": "2d612200-3d21-4df7-9b44-9d4921139940", 00:18:12.161 "is_configured": true, 00:18:12.161 "data_offset": 0, 00:18:12.161 "data_size": 65536 00:18:12.161 }, 00:18:12.161 { 00:18:12.161 "name": "BaseBdev2", 00:18:12.161 "uuid": "7994e28e-4b79-4e3e-ad36-896ba25fc5ce", 00:18:12.161 "is_configured": true, 00:18:12.161 "data_offset": 0, 00:18:12.161 "data_size": 65536 00:18:12.161 }, 00:18:12.161 { 00:18:12.161 "name": "BaseBdev3", 00:18:12.161 "uuid": "3176ad83-e9f8-4429-b145-d08aa2f6b0da", 00:18:12.161 "is_configured": true, 00:18:12.161 "data_offset": 0, 00:18:12.161 "data_size": 65536 00:18:12.161 } 00:18:12.161 ] 00:18:12.161 }' 00:18:12.161 06:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:12.161 06:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.420 06:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:18:12.420 06:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:12.420 06:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:12.420 06:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:12.420 06:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:12.420 06:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:12.420 06:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:12.420 06:27:56 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:12.420 06:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.420 06:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.420 [2024-11-26 06:27:56.548828] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:12.679 06:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.679 06:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:12.679 "name": "Existed_Raid", 00:18:12.679 "aliases": [ 00:18:12.679 "b1e754fe-6c9a-4152-adf2-04b2b83bb6dc" 00:18:12.679 ], 00:18:12.679 "product_name": "Raid Volume", 00:18:12.679 "block_size": 512, 00:18:12.679 "num_blocks": 131072, 00:18:12.679 "uuid": "b1e754fe-6c9a-4152-adf2-04b2b83bb6dc", 00:18:12.679 "assigned_rate_limits": { 00:18:12.679 "rw_ios_per_sec": 0, 00:18:12.679 "rw_mbytes_per_sec": 0, 00:18:12.679 "r_mbytes_per_sec": 0, 00:18:12.679 "w_mbytes_per_sec": 0 00:18:12.679 }, 00:18:12.679 "claimed": false, 00:18:12.680 "zoned": false, 00:18:12.680 "supported_io_types": { 00:18:12.680 "read": true, 00:18:12.680 "write": true, 00:18:12.680 "unmap": false, 00:18:12.680 "flush": false, 00:18:12.680 "reset": true, 00:18:12.680 "nvme_admin": false, 00:18:12.680 "nvme_io": false, 00:18:12.680 "nvme_io_md": false, 00:18:12.680 "write_zeroes": true, 00:18:12.680 "zcopy": false, 00:18:12.680 "get_zone_info": false, 00:18:12.680 "zone_management": false, 00:18:12.680 "zone_append": false, 00:18:12.680 "compare": false, 00:18:12.680 "compare_and_write": false, 00:18:12.680 "abort": false, 00:18:12.680 "seek_hole": false, 00:18:12.680 "seek_data": false, 00:18:12.680 "copy": false, 00:18:12.680 "nvme_iov_md": false 00:18:12.680 }, 00:18:12.680 "driver_specific": { 00:18:12.680 "raid": { 00:18:12.680 "uuid": "b1e754fe-6c9a-4152-adf2-04b2b83bb6dc", 
00:18:12.680 "strip_size_kb": 64, 00:18:12.680 "state": "online", 00:18:12.680 "raid_level": "raid5f", 00:18:12.680 "superblock": false, 00:18:12.680 "num_base_bdevs": 3, 00:18:12.680 "num_base_bdevs_discovered": 3, 00:18:12.680 "num_base_bdevs_operational": 3, 00:18:12.680 "base_bdevs_list": [ 00:18:12.680 { 00:18:12.680 "name": "NewBaseBdev", 00:18:12.680 "uuid": "2d612200-3d21-4df7-9b44-9d4921139940", 00:18:12.680 "is_configured": true, 00:18:12.680 "data_offset": 0, 00:18:12.680 "data_size": 65536 00:18:12.680 }, 00:18:12.680 { 00:18:12.680 "name": "BaseBdev2", 00:18:12.680 "uuid": "7994e28e-4b79-4e3e-ad36-896ba25fc5ce", 00:18:12.680 "is_configured": true, 00:18:12.680 "data_offset": 0, 00:18:12.680 "data_size": 65536 00:18:12.680 }, 00:18:12.680 { 00:18:12.680 "name": "BaseBdev3", 00:18:12.680 "uuid": "3176ad83-e9f8-4429-b145-d08aa2f6b0da", 00:18:12.680 "is_configured": true, 00:18:12.680 "data_offset": 0, 00:18:12.680 "data_size": 65536 00:18:12.680 } 00:18:12.680 ] 00:18:12.680 } 00:18:12.680 } 00:18:12.680 }' 00:18:12.680 06:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:12.680 06:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:18:12.680 BaseBdev2 00:18:12.680 BaseBdev3' 00:18:12.680 06:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:12.680 06:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:12.680 06:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:12.680 06:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:18:12.680 06:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:12.680 06:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.680 06:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.680 06:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.680 06:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:12.680 06:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:12.680 06:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:12.680 06:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:12.680 06:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.680 06:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.680 06:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:12.680 06:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.680 06:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:12.680 06:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:12.680 06:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:12.680 06:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:12.680 06:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:12.680 
06:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.680 06:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.680 06:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.939 06:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:12.939 06:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:12.939 06:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:12.939 06:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.939 06:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.939 [2024-11-26 06:27:56.820192] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:12.939 [2024-11-26 06:27:56.820228] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:12.939 [2024-11-26 06:27:56.820346] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:12.939 [2024-11-26 06:27:56.820702] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:12.939 [2024-11-26 06:27:56.820741] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:18:12.939 06:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.939 06:27:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80440 00:18:12.939 06:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 80440 ']' 00:18:12.939 06:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 80440 
00:18:12.939 06:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:18:12.939 06:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:12.940 06:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80440 00:18:12.940 06:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:12.940 06:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:12.940 killing process with pid 80440 00:18:12.940 06:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80440' 00:18:12.940 06:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 80440 00:18:12.940 [2024-11-26 06:27:56.872451] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:12.940 06:27:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 80440 00:18:13.199 [2024-11-26 06:27:57.215728] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:14.577 06:27:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:18:14.577 00:18:14.577 real 0m11.203s 00:18:14.577 user 0m17.438s 00:18:14.577 sys 0m2.230s 00:18:14.577 06:27:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:14.577 06:27:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.577 ************************************ 00:18:14.577 END TEST raid5f_state_function_test 00:18:14.577 ************************************ 00:18:14.578 06:27:58 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:18:14.578 06:27:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:14.578 
06:27:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:14.578 06:27:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:14.578 ************************************ 00:18:14.578 START TEST raid5f_state_function_test_sb 00:18:14.578 ************************************ 00:18:14.578 06:27:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:18:14.578 06:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:18:14.578 06:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:18:14.578 06:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:14.578 06:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:14.578 06:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:14.578 06:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:14.578 06:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:14.578 06:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:14.578 06:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:14.578 06:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:14.578 06:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:14.578 06:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:14.578 06:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:18:14.578 06:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:14.578 
06:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:14.578 06:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:14.578 06:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:14.578 06:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:14.578 06:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:14.578 06:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:14.578 06:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:14.578 06:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:18:14.578 06:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:18:14.578 06:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:18:14.578 06:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:14.578 06:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:14.578 06:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=81069 00:18:14.578 06:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:14.578 Process raid pid: 81069 00:18:14.578 06:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 81069' 00:18:14.578 06:27:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 81069 00:18:14.578 06:27:58 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 81069 ']' 00:18:14.578 06:27:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:14.578 06:27:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:14.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:14.578 06:27:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:14.578 06:27:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:14.578 06:27:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:14.578 [2024-11-26 06:27:58.648511] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:18:14.578 [2024-11-26 06:27:58.648664] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:14.837 [2024-11-26 06:27:58.830383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.096 [2024-11-26 06:27:58.971518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:15.096 [2024-11-26 06:27:59.223155] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:15.096 [2024-11-26 06:27:59.223204] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:15.663 06:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:15.663 06:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:18:15.663 06:27:59 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:18:15.663 06:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.663 06:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.663 [2024-11-26 06:27:59.498534] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:15.663 [2024-11-26 06:27:59.498597] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:15.663 [2024-11-26 06:27:59.498609] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:15.663 [2024-11-26 06:27:59.498619] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:15.663 [2024-11-26 06:27:59.498626] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:15.663 [2024-11-26 06:27:59.498636] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:15.663 06:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.663 06:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:15.663 06:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:15.663 06:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:15.663 06:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:15.663 06:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:15.663 06:27:59 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:15.663 06:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:15.663 06:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:15.663 06:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:15.663 06:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:15.663 06:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.663 06:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:15.663 06:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.663 06:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.663 06:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.663 06:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:15.663 "name": "Existed_Raid", 00:18:15.663 "uuid": "740432c0-95e8-420b-81d7-b41468210e3e", 00:18:15.663 "strip_size_kb": 64, 00:18:15.663 "state": "configuring", 00:18:15.663 "raid_level": "raid5f", 00:18:15.663 "superblock": true, 00:18:15.663 "num_base_bdevs": 3, 00:18:15.663 "num_base_bdevs_discovered": 0, 00:18:15.663 "num_base_bdevs_operational": 3, 00:18:15.663 "base_bdevs_list": [ 00:18:15.663 { 00:18:15.663 "name": "BaseBdev1", 00:18:15.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.663 "is_configured": false, 00:18:15.663 "data_offset": 0, 00:18:15.663 "data_size": 0 00:18:15.663 }, 00:18:15.663 { 00:18:15.663 "name": "BaseBdev2", 00:18:15.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.663 "is_configured": false, 00:18:15.663 
"data_offset": 0, 00:18:15.663 "data_size": 0 00:18:15.663 }, 00:18:15.663 { 00:18:15.663 "name": "BaseBdev3", 00:18:15.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.663 "is_configured": false, 00:18:15.663 "data_offset": 0, 00:18:15.663 "data_size": 0 00:18:15.663 } 00:18:15.663 ] 00:18:15.663 }' 00:18:15.663 06:27:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:15.663 06:27:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.922 06:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:15.922 06:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.922 06:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.922 [2024-11-26 06:28:00.009635] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:15.922 [2024-11-26 06:28:00.009731] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:15.922 06:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.922 06:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:18:15.922 06:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.922 06:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.922 [2024-11-26 06:28:00.021608] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:15.922 [2024-11-26 06:28:00.021664] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:15.922 [2024-11-26 06:28:00.021675] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:15.923 [2024-11-26 06:28:00.021685] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:15.923 [2024-11-26 06:28:00.021692] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:15.923 [2024-11-26 06:28:00.021702] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:15.923 06:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.923 06:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:15.923 06:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.923 06:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.182 [2024-11-26 06:28:00.078778] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:16.182 BaseBdev1 00:18:16.182 06:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.182 06:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:16.182 06:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:16.182 06:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:16.182 06:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:16.182 06:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:16.182 06:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:16.182 06:28:00 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:16.182 06:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.182 06:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.182 06:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.182 06:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:16.182 06:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.182 06:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.182 [ 00:18:16.182 { 00:18:16.182 "name": "BaseBdev1", 00:18:16.182 "aliases": [ 00:18:16.182 "f92cc6a8-d3cc-45fd-966f-1c5ae4983b5e" 00:18:16.182 ], 00:18:16.182 "product_name": "Malloc disk", 00:18:16.182 "block_size": 512, 00:18:16.182 "num_blocks": 65536, 00:18:16.182 "uuid": "f92cc6a8-d3cc-45fd-966f-1c5ae4983b5e", 00:18:16.182 "assigned_rate_limits": { 00:18:16.182 "rw_ios_per_sec": 0, 00:18:16.182 "rw_mbytes_per_sec": 0, 00:18:16.182 "r_mbytes_per_sec": 0, 00:18:16.182 "w_mbytes_per_sec": 0 00:18:16.182 }, 00:18:16.182 "claimed": true, 00:18:16.182 "claim_type": "exclusive_write", 00:18:16.182 "zoned": false, 00:18:16.182 "supported_io_types": { 00:18:16.182 "read": true, 00:18:16.182 "write": true, 00:18:16.182 "unmap": true, 00:18:16.182 "flush": true, 00:18:16.182 "reset": true, 00:18:16.182 "nvme_admin": false, 00:18:16.182 "nvme_io": false, 00:18:16.182 "nvme_io_md": false, 00:18:16.182 "write_zeroes": true, 00:18:16.182 "zcopy": true, 00:18:16.182 "get_zone_info": false, 00:18:16.182 "zone_management": false, 00:18:16.182 "zone_append": false, 00:18:16.182 "compare": false, 00:18:16.182 "compare_and_write": false, 00:18:16.182 "abort": true, 00:18:16.182 "seek_hole": false, 00:18:16.182 
"seek_data": false, 00:18:16.182 "copy": true, 00:18:16.182 "nvme_iov_md": false 00:18:16.182 }, 00:18:16.182 "memory_domains": [ 00:18:16.182 { 00:18:16.182 "dma_device_id": "system", 00:18:16.182 "dma_device_type": 1 00:18:16.182 }, 00:18:16.182 { 00:18:16.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:16.182 "dma_device_type": 2 00:18:16.182 } 00:18:16.182 ], 00:18:16.182 "driver_specific": {} 00:18:16.182 } 00:18:16.182 ] 00:18:16.182 06:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.182 06:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:16.182 06:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:16.182 06:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:16.182 06:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:16.183 06:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:16.183 06:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:16.183 06:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:16.183 06:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:16.183 06:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:16.183 06:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:16.183 06:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:16.183 06:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:18:16.183 06:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.183 06:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.183 06:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.183 06:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.183 06:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:16.183 "name": "Existed_Raid", 00:18:16.183 "uuid": "f4b37831-9641-4784-8239-95e38ef6269a", 00:18:16.183 "strip_size_kb": 64, 00:18:16.183 "state": "configuring", 00:18:16.183 "raid_level": "raid5f", 00:18:16.183 "superblock": true, 00:18:16.183 "num_base_bdevs": 3, 00:18:16.183 "num_base_bdevs_discovered": 1, 00:18:16.183 "num_base_bdevs_operational": 3, 00:18:16.183 "base_bdevs_list": [ 00:18:16.183 { 00:18:16.183 "name": "BaseBdev1", 00:18:16.183 "uuid": "f92cc6a8-d3cc-45fd-966f-1c5ae4983b5e", 00:18:16.183 "is_configured": true, 00:18:16.183 "data_offset": 2048, 00:18:16.183 "data_size": 63488 00:18:16.183 }, 00:18:16.183 { 00:18:16.183 "name": "BaseBdev2", 00:18:16.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.183 "is_configured": false, 00:18:16.183 "data_offset": 0, 00:18:16.183 "data_size": 0 00:18:16.183 }, 00:18:16.183 { 00:18:16.183 "name": "BaseBdev3", 00:18:16.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.183 "is_configured": false, 00:18:16.183 "data_offset": 0, 00:18:16.183 "data_size": 0 00:18:16.183 } 00:18:16.183 ] 00:18:16.183 }' 00:18:16.183 06:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:16.183 06:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.442 06:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:18:16.442 06:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.442 06:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.442 [2024-11-26 06:28:00.550047] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:16.442 [2024-11-26 06:28:00.550129] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:16.442 06:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.442 06:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:18:16.442 06:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.442 06:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.442 [2024-11-26 06:28:00.558099] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:16.442 [2024-11-26 06:28:00.560280] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:16.442 [2024-11-26 06:28:00.560390] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:16.442 [2024-11-26 06:28:00.560408] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:16.442 [2024-11-26 06:28:00.560433] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:16.442 06:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.442 06:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:16.442 06:28:00 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:16.442 06:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:16.442 06:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:16.442 06:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:16.442 06:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:16.442 06:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:16.443 06:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:16.443 06:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:16.443 06:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:16.443 06:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:16.443 06:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:16.443 06:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:16.443 06:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.443 06:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.443 06:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.703 06:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.703 06:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:16.703 "name": 
"Existed_Raid", 00:18:16.703 "uuid": "310e048b-564c-4df0-a1bc-5beb0d1cf729", 00:18:16.703 "strip_size_kb": 64, 00:18:16.703 "state": "configuring", 00:18:16.703 "raid_level": "raid5f", 00:18:16.703 "superblock": true, 00:18:16.703 "num_base_bdevs": 3, 00:18:16.703 "num_base_bdevs_discovered": 1, 00:18:16.703 "num_base_bdevs_operational": 3, 00:18:16.703 "base_bdevs_list": [ 00:18:16.703 { 00:18:16.703 "name": "BaseBdev1", 00:18:16.703 "uuid": "f92cc6a8-d3cc-45fd-966f-1c5ae4983b5e", 00:18:16.703 "is_configured": true, 00:18:16.703 "data_offset": 2048, 00:18:16.703 "data_size": 63488 00:18:16.703 }, 00:18:16.703 { 00:18:16.703 "name": "BaseBdev2", 00:18:16.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.703 "is_configured": false, 00:18:16.703 "data_offset": 0, 00:18:16.703 "data_size": 0 00:18:16.703 }, 00:18:16.703 { 00:18:16.703 "name": "BaseBdev3", 00:18:16.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.703 "is_configured": false, 00:18:16.703 "data_offset": 0, 00:18:16.703 "data_size": 0 00:18:16.703 } 00:18:16.703 ] 00:18:16.703 }' 00:18:16.703 06:28:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:16.703 06:28:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.962 06:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:16.962 06:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.962 06:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.962 [2024-11-26 06:28:01.073755] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:16.962 BaseBdev2 00:18:16.962 06:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.962 06:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 
-- # waitforbdev BaseBdev2 00:18:16.962 06:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:16.962 06:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:16.962 06:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:16.962 06:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:16.962 06:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:16.962 06:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:16.962 06:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.962 06:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.962 06:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.962 06:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:16.962 06:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.962 06:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.962 [ 00:18:16.962 { 00:18:16.962 "name": "BaseBdev2", 00:18:16.962 "aliases": [ 00:18:16.962 "3eed4284-b1ab-43c0-881b-eec1045f5eeb" 00:18:17.222 ], 00:18:17.222 "product_name": "Malloc disk", 00:18:17.222 "block_size": 512, 00:18:17.222 "num_blocks": 65536, 00:18:17.222 "uuid": "3eed4284-b1ab-43c0-881b-eec1045f5eeb", 00:18:17.222 "assigned_rate_limits": { 00:18:17.222 "rw_ios_per_sec": 0, 00:18:17.222 "rw_mbytes_per_sec": 0, 00:18:17.222 "r_mbytes_per_sec": 0, 00:18:17.222 "w_mbytes_per_sec": 0 00:18:17.222 }, 00:18:17.222 "claimed": true, 
00:18:17.222 "claim_type": "exclusive_write", 00:18:17.222 "zoned": false, 00:18:17.222 "supported_io_types": { 00:18:17.222 "read": true, 00:18:17.222 "write": true, 00:18:17.222 "unmap": true, 00:18:17.222 "flush": true, 00:18:17.222 "reset": true, 00:18:17.222 "nvme_admin": false, 00:18:17.222 "nvme_io": false, 00:18:17.222 "nvme_io_md": false, 00:18:17.222 "write_zeroes": true, 00:18:17.222 "zcopy": true, 00:18:17.222 "get_zone_info": false, 00:18:17.222 "zone_management": false, 00:18:17.222 "zone_append": false, 00:18:17.222 "compare": false, 00:18:17.222 "compare_and_write": false, 00:18:17.222 "abort": true, 00:18:17.222 "seek_hole": false, 00:18:17.222 "seek_data": false, 00:18:17.222 "copy": true, 00:18:17.222 "nvme_iov_md": false 00:18:17.222 }, 00:18:17.222 "memory_domains": [ 00:18:17.222 { 00:18:17.222 "dma_device_id": "system", 00:18:17.222 "dma_device_type": 1 00:18:17.222 }, 00:18:17.222 { 00:18:17.222 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:17.222 "dma_device_type": 2 00:18:17.222 } 00:18:17.222 ], 00:18:17.222 "driver_specific": {} 00:18:17.222 } 00:18:17.222 ] 00:18:17.222 06:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.222 06:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:17.222 06:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:17.222 06:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:17.222 06:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:17.222 06:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:17.222 06:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:17.222 06:28:01 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:17.222 06:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:17.222 06:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:17.222 06:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:17.222 06:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:17.222 06:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:17.222 06:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:17.222 06:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.222 06:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.222 06:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:17.222 06:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.222 06:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.222 06:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:17.222 "name": "Existed_Raid", 00:18:17.222 "uuid": "310e048b-564c-4df0-a1bc-5beb0d1cf729", 00:18:17.222 "strip_size_kb": 64, 00:18:17.222 "state": "configuring", 00:18:17.222 "raid_level": "raid5f", 00:18:17.222 "superblock": true, 00:18:17.222 "num_base_bdevs": 3, 00:18:17.222 "num_base_bdevs_discovered": 2, 00:18:17.222 "num_base_bdevs_operational": 3, 00:18:17.222 "base_bdevs_list": [ 00:18:17.222 { 00:18:17.222 "name": "BaseBdev1", 00:18:17.222 "uuid": "f92cc6a8-d3cc-45fd-966f-1c5ae4983b5e", 
00:18:17.222 "is_configured": true, 00:18:17.222 "data_offset": 2048, 00:18:17.222 "data_size": 63488 00:18:17.222 }, 00:18:17.222 { 00:18:17.222 "name": "BaseBdev2", 00:18:17.222 "uuid": "3eed4284-b1ab-43c0-881b-eec1045f5eeb", 00:18:17.222 "is_configured": true, 00:18:17.222 "data_offset": 2048, 00:18:17.222 "data_size": 63488 00:18:17.222 }, 00:18:17.222 { 00:18:17.222 "name": "BaseBdev3", 00:18:17.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.222 "is_configured": false, 00:18:17.222 "data_offset": 0, 00:18:17.222 "data_size": 0 00:18:17.222 } 00:18:17.222 ] 00:18:17.222 }' 00:18:17.222 06:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:17.222 06:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.523 06:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:17.523 06:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.523 06:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.523 [2024-11-26 06:28:01.596955] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:17.524 [2024-11-26 06:28:01.597474] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:17.524 [2024-11-26 06:28:01.597548] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:18:17.524 [2024-11-26 06:28:01.597913] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:17.524 BaseBdev3 00:18:17.524 06:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.524 06:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:18:17.524 06:28:01 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:18:17.524 06:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:17.524 06:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:17.524 06:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:17.524 06:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:17.524 06:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:17.524 06:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.524 06:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.524 [2024-11-26 06:28:01.603603] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:17.524 [2024-11-26 06:28:01.603657] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:17.524 [2024-11-26 06:28:01.604045] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:17.524 06:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.524 06:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:17.524 06:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.524 06:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.524 [ 00:18:17.524 { 00:18:17.524 "name": "BaseBdev3", 00:18:17.524 "aliases": [ 00:18:17.524 "41589445-91ac-4e34-aa4f-5711c1a64643" 00:18:17.524 ], 00:18:17.524 "product_name": "Malloc disk", 00:18:17.524 "block_size": 512, 00:18:17.524 
"num_blocks": 65536, 00:18:17.524 "uuid": "41589445-91ac-4e34-aa4f-5711c1a64643", 00:18:17.524 "assigned_rate_limits": { 00:18:17.524 "rw_ios_per_sec": 0, 00:18:17.524 "rw_mbytes_per_sec": 0, 00:18:17.524 "r_mbytes_per_sec": 0, 00:18:17.524 "w_mbytes_per_sec": 0 00:18:17.524 }, 00:18:17.524 "claimed": true, 00:18:17.524 "claim_type": "exclusive_write", 00:18:17.524 "zoned": false, 00:18:17.524 "supported_io_types": { 00:18:17.524 "read": true, 00:18:17.524 "write": true, 00:18:17.524 "unmap": true, 00:18:17.524 "flush": true, 00:18:17.524 "reset": true, 00:18:17.524 "nvme_admin": false, 00:18:17.524 "nvme_io": false, 00:18:17.524 "nvme_io_md": false, 00:18:17.524 "write_zeroes": true, 00:18:17.524 "zcopy": true, 00:18:17.524 "get_zone_info": false, 00:18:17.524 "zone_management": false, 00:18:17.524 "zone_append": false, 00:18:17.524 "compare": false, 00:18:17.524 "compare_and_write": false, 00:18:17.524 "abort": true, 00:18:17.524 "seek_hole": false, 00:18:17.524 "seek_data": false, 00:18:17.524 "copy": true, 00:18:17.524 "nvme_iov_md": false 00:18:17.524 }, 00:18:17.524 "memory_domains": [ 00:18:17.524 { 00:18:17.524 "dma_device_id": "system", 00:18:17.524 "dma_device_type": 1 00:18:17.524 }, 00:18:17.524 { 00:18:17.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:17.524 "dma_device_type": 2 00:18:17.524 } 00:18:17.524 ], 00:18:17.524 "driver_specific": {} 00:18:17.524 } 00:18:17.524 ] 00:18:17.524 06:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.524 06:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:17.524 06:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:17.524 06:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:17.524 06:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 
3 00:18:17.524 06:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:17.524 06:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:17.524 06:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:17.524 06:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:17.524 06:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:17.524 06:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:17.524 06:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:17.524 06:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:17.524 06:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:17.794 06:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.794 06:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:17.794 06:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.794 06:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.794 06:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.794 06:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:17.794 "name": "Existed_Raid", 00:18:17.794 "uuid": "310e048b-564c-4df0-a1bc-5beb0d1cf729", 00:18:17.794 "strip_size_kb": 64, 00:18:17.794 "state": "online", 00:18:17.794 "raid_level": "raid5f", 00:18:17.794 "superblock": true, 
00:18:17.794 "num_base_bdevs": 3, 00:18:17.794 "num_base_bdevs_discovered": 3, 00:18:17.794 "num_base_bdevs_operational": 3, 00:18:17.794 "base_bdevs_list": [ 00:18:17.794 { 00:18:17.794 "name": "BaseBdev1", 00:18:17.794 "uuid": "f92cc6a8-d3cc-45fd-966f-1c5ae4983b5e", 00:18:17.794 "is_configured": true, 00:18:17.794 "data_offset": 2048, 00:18:17.794 "data_size": 63488 00:18:17.794 }, 00:18:17.794 { 00:18:17.794 "name": "BaseBdev2", 00:18:17.794 "uuid": "3eed4284-b1ab-43c0-881b-eec1045f5eeb", 00:18:17.794 "is_configured": true, 00:18:17.794 "data_offset": 2048, 00:18:17.794 "data_size": 63488 00:18:17.794 }, 00:18:17.794 { 00:18:17.794 "name": "BaseBdev3", 00:18:17.794 "uuid": "41589445-91ac-4e34-aa4f-5711c1a64643", 00:18:17.794 "is_configured": true, 00:18:17.794 "data_offset": 2048, 00:18:17.794 "data_size": 63488 00:18:17.794 } 00:18:17.794 ] 00:18:17.794 }' 00:18:17.794 06:28:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:17.794 06:28:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:18.053 06:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:18.053 06:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:18.053 06:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:18.053 06:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:18.053 06:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:18:18.053 06:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:18.053 06:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:18.053 06:28:02 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.053 06:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:18.053 06:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:18.053 [2024-11-26 06:28:02.090586] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:18.053 06:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.053 06:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:18.053 "name": "Existed_Raid", 00:18:18.053 "aliases": [ 00:18:18.053 "310e048b-564c-4df0-a1bc-5beb0d1cf729" 00:18:18.053 ], 00:18:18.053 "product_name": "Raid Volume", 00:18:18.053 "block_size": 512, 00:18:18.053 "num_blocks": 126976, 00:18:18.053 "uuid": "310e048b-564c-4df0-a1bc-5beb0d1cf729", 00:18:18.053 "assigned_rate_limits": { 00:18:18.053 "rw_ios_per_sec": 0, 00:18:18.053 "rw_mbytes_per_sec": 0, 00:18:18.053 "r_mbytes_per_sec": 0, 00:18:18.053 "w_mbytes_per_sec": 0 00:18:18.053 }, 00:18:18.053 "claimed": false, 00:18:18.053 "zoned": false, 00:18:18.053 "supported_io_types": { 00:18:18.053 "read": true, 00:18:18.053 "write": true, 00:18:18.053 "unmap": false, 00:18:18.053 "flush": false, 00:18:18.053 "reset": true, 00:18:18.053 "nvme_admin": false, 00:18:18.054 "nvme_io": false, 00:18:18.054 "nvme_io_md": false, 00:18:18.054 "write_zeroes": true, 00:18:18.054 "zcopy": false, 00:18:18.054 "get_zone_info": false, 00:18:18.054 "zone_management": false, 00:18:18.054 "zone_append": false, 00:18:18.054 "compare": false, 00:18:18.054 "compare_and_write": false, 00:18:18.054 "abort": false, 00:18:18.054 "seek_hole": false, 00:18:18.054 "seek_data": false, 00:18:18.054 "copy": false, 00:18:18.054 "nvme_iov_md": false 00:18:18.054 }, 00:18:18.054 "driver_specific": { 00:18:18.054 "raid": { 00:18:18.054 "uuid": "310e048b-564c-4df0-a1bc-5beb0d1cf729", 00:18:18.054 
"strip_size_kb": 64, 00:18:18.054 "state": "online", 00:18:18.054 "raid_level": "raid5f", 00:18:18.054 "superblock": true, 00:18:18.054 "num_base_bdevs": 3, 00:18:18.054 "num_base_bdevs_discovered": 3, 00:18:18.054 "num_base_bdevs_operational": 3, 00:18:18.054 "base_bdevs_list": [ 00:18:18.054 { 00:18:18.054 "name": "BaseBdev1", 00:18:18.054 "uuid": "f92cc6a8-d3cc-45fd-966f-1c5ae4983b5e", 00:18:18.054 "is_configured": true, 00:18:18.054 "data_offset": 2048, 00:18:18.054 "data_size": 63488 00:18:18.054 }, 00:18:18.054 { 00:18:18.054 "name": "BaseBdev2", 00:18:18.054 "uuid": "3eed4284-b1ab-43c0-881b-eec1045f5eeb", 00:18:18.054 "is_configured": true, 00:18:18.054 "data_offset": 2048, 00:18:18.054 "data_size": 63488 00:18:18.054 }, 00:18:18.054 { 00:18:18.054 "name": "BaseBdev3", 00:18:18.054 "uuid": "41589445-91ac-4e34-aa4f-5711c1a64643", 00:18:18.054 "is_configured": true, 00:18:18.054 "data_offset": 2048, 00:18:18.054 "data_size": 63488 00:18:18.054 } 00:18:18.054 ] 00:18:18.054 } 00:18:18.054 } 00:18:18.054 }' 00:18:18.054 06:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:18.054 06:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:18.054 BaseBdev2 00:18:18.054 BaseBdev3' 00:18:18.054 06:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:18.313 06:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:18.313 06:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:18.313 06:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:18.313 06:28:02 bdev_raid.raid5f_state_function_test_sb 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:18.313 06:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.313 06:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:18.313 06:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.313 06:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:18.313 06:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:18.313 06:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:18.313 06:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:18.313 06:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:18.313 06:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.313 06:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:18.313 06:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.313 06:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:18.313 06:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:18.313 06:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:18.313 06:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:18.313 06:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.313 06:28:02 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:18.313 06:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:18.313 06:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.314 06:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:18.314 06:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:18.314 06:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:18.314 06:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.314 06:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:18.314 [2024-11-26 06:28:02.389881] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:18.573 06:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.573 06:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:18.573 06:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:18:18.573 06:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:18.573 06:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:18:18.573 06:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:18.573 06:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:18:18.573 06:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:18.573 
06:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:18.574 06:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:18.574 06:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:18.574 06:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:18.574 06:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:18.574 06:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:18.574 06:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:18.574 06:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:18.574 06:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.574 06:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:18.574 06:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.574 06:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:18.574 06:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.574 06:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:18.574 "name": "Existed_Raid", 00:18:18.574 "uuid": "310e048b-564c-4df0-a1bc-5beb0d1cf729", 00:18:18.574 "strip_size_kb": 64, 00:18:18.574 "state": "online", 00:18:18.574 "raid_level": "raid5f", 00:18:18.574 "superblock": true, 00:18:18.574 "num_base_bdevs": 3, 00:18:18.574 "num_base_bdevs_discovered": 2, 00:18:18.574 "num_base_bdevs_operational": 2, 00:18:18.574 
"base_bdevs_list": [ 00:18:18.574 { 00:18:18.574 "name": null, 00:18:18.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.574 "is_configured": false, 00:18:18.574 "data_offset": 0, 00:18:18.574 "data_size": 63488 00:18:18.574 }, 00:18:18.574 { 00:18:18.574 "name": "BaseBdev2", 00:18:18.574 "uuid": "3eed4284-b1ab-43c0-881b-eec1045f5eeb", 00:18:18.574 "is_configured": true, 00:18:18.574 "data_offset": 2048, 00:18:18.574 "data_size": 63488 00:18:18.574 }, 00:18:18.574 { 00:18:18.574 "name": "BaseBdev3", 00:18:18.574 "uuid": "41589445-91ac-4e34-aa4f-5711c1a64643", 00:18:18.574 "is_configured": true, 00:18:18.574 "data_offset": 2048, 00:18:18.574 "data_size": 63488 00:18:18.574 } 00:18:18.574 ] 00:18:18.574 }' 00:18:18.574 06:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:18.574 06:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:18.833 06:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:18.833 06:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:18.833 06:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:18.833 06:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.833 06:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.833 06:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:18.833 06:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.093 06:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:19.093 06:28:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:19.093 06:28:02 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:19.093 06:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.093 06:28:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.093 [2024-11-26 06:28:02.978037] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:19.093 [2024-11-26 06:28:02.978261] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:19.093 [2024-11-26 06:28:03.081148] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:19.093 06:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.093 06:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:19.093 06:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:19.093 06:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.093 06:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:19.093 06:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.093 06:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.093 06:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.093 06:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:19.093 06:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:19.093 06:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:18:19.093 06:28:03 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.093 06:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.093 [2024-11-26 06:28:03.137091] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:19.093 [2024-11-26 06:28:03.137210] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:19.353 06:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.353 06:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:19.353 06:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:19.353 06:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.353 06:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:19.353 06:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.353 06:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.353 06:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.353 06:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:19.353 06:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:19.353 06:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:18:19.353 06:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:18:19.353 06:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:19.353 06:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 
-- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:19.353 06:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.353 06:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.353 BaseBdev2 00:18:19.353 06:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.353 06:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:18:19.353 06:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:19.353 06:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:19.353 06:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:19.353 06:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:19.353 06:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:19.353 06:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:19.353 06:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.353 06:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.353 06:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.353 06:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:19.353 06:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.353 06:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.353 [ 00:18:19.353 { 00:18:19.353 "name": "BaseBdev2", 
00:18:19.353 "aliases": [ 00:18:19.353 "e0f6ab0a-9857-47d0-9476-8bf1deea815b" 00:18:19.353 ], 00:18:19.353 "product_name": "Malloc disk", 00:18:19.354 "block_size": 512, 00:18:19.354 "num_blocks": 65536, 00:18:19.354 "uuid": "e0f6ab0a-9857-47d0-9476-8bf1deea815b", 00:18:19.354 "assigned_rate_limits": { 00:18:19.354 "rw_ios_per_sec": 0, 00:18:19.354 "rw_mbytes_per_sec": 0, 00:18:19.354 "r_mbytes_per_sec": 0, 00:18:19.354 "w_mbytes_per_sec": 0 00:18:19.354 }, 00:18:19.354 "claimed": false, 00:18:19.354 "zoned": false, 00:18:19.354 "supported_io_types": { 00:18:19.354 "read": true, 00:18:19.354 "write": true, 00:18:19.354 "unmap": true, 00:18:19.354 "flush": true, 00:18:19.354 "reset": true, 00:18:19.354 "nvme_admin": false, 00:18:19.354 "nvme_io": false, 00:18:19.354 "nvme_io_md": false, 00:18:19.354 "write_zeroes": true, 00:18:19.354 "zcopy": true, 00:18:19.354 "get_zone_info": false, 00:18:19.354 "zone_management": false, 00:18:19.354 "zone_append": false, 00:18:19.354 "compare": false, 00:18:19.354 "compare_and_write": false, 00:18:19.354 "abort": true, 00:18:19.354 "seek_hole": false, 00:18:19.354 "seek_data": false, 00:18:19.354 "copy": true, 00:18:19.354 "nvme_iov_md": false 00:18:19.354 }, 00:18:19.354 "memory_domains": [ 00:18:19.354 { 00:18:19.354 "dma_device_id": "system", 00:18:19.354 "dma_device_type": 1 00:18:19.354 }, 00:18:19.354 { 00:18:19.354 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:19.354 "dma_device_type": 2 00:18:19.354 } 00:18:19.354 ], 00:18:19.354 "driver_specific": {} 00:18:19.354 } 00:18:19.354 ] 00:18:19.354 06:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.354 06:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:19.354 06:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:19.354 06:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 
00:18:19.354 06:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:19.354 06:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.354 06:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.354 BaseBdev3 00:18:19.354 06:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.354 06:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:18:19.354 06:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:18:19.354 06:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:19.354 06:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:19.354 06:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:19.354 06:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:19.354 06:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:19.354 06:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.354 06:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.354 06:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.354 06:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:19.354 06:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.354 06:28:03 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:18:19.354 [ 00:18:19.354 { 00:18:19.354 "name": "BaseBdev3", 00:18:19.354 "aliases": [ 00:18:19.354 "be2bd2df-864c-4abc-897c-dd364fe1d964" 00:18:19.354 ], 00:18:19.354 "product_name": "Malloc disk", 00:18:19.354 "block_size": 512, 00:18:19.354 "num_blocks": 65536, 00:18:19.354 "uuid": "be2bd2df-864c-4abc-897c-dd364fe1d964", 00:18:19.354 "assigned_rate_limits": { 00:18:19.354 "rw_ios_per_sec": 0, 00:18:19.354 "rw_mbytes_per_sec": 0, 00:18:19.354 "r_mbytes_per_sec": 0, 00:18:19.354 "w_mbytes_per_sec": 0 00:18:19.354 }, 00:18:19.354 "claimed": false, 00:18:19.354 "zoned": false, 00:18:19.354 "supported_io_types": { 00:18:19.354 "read": true, 00:18:19.354 "write": true, 00:18:19.354 "unmap": true, 00:18:19.354 "flush": true, 00:18:19.354 "reset": true, 00:18:19.354 "nvme_admin": false, 00:18:19.354 "nvme_io": false, 00:18:19.354 "nvme_io_md": false, 00:18:19.354 "write_zeroes": true, 00:18:19.354 "zcopy": true, 00:18:19.354 "get_zone_info": false, 00:18:19.354 "zone_management": false, 00:18:19.354 "zone_append": false, 00:18:19.354 "compare": false, 00:18:19.354 "compare_and_write": false, 00:18:19.354 "abort": true, 00:18:19.354 "seek_hole": false, 00:18:19.354 "seek_data": false, 00:18:19.354 "copy": true, 00:18:19.354 "nvme_iov_md": false 00:18:19.354 }, 00:18:19.354 "memory_domains": [ 00:18:19.354 { 00:18:19.354 "dma_device_id": "system", 00:18:19.354 "dma_device_type": 1 00:18:19.354 }, 00:18:19.354 { 00:18:19.354 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:19.354 "dma_device_type": 2 00:18:19.354 } 00:18:19.354 ], 00:18:19.354 "driver_specific": {} 00:18:19.354 } 00:18:19.354 ] 00:18:19.354 06:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.354 06:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:19.354 06:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:19.354 
06:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:19.354 06:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:18:19.354 06:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.354 06:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.354 [2024-11-26 06:28:03.483707] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:19.354 [2024-11-26 06:28:03.483801] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:19.354 [2024-11-26 06:28:03.483847] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:19.614 [2024-11-26 06:28:03.486108] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:19.614 06:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.614 06:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:19.614 06:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:19.614 06:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:19.614 06:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:19.614 06:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:19.614 06:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:19.614 06:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:18:19.614 06:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:19.614 06:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:19.614 06:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:19.614 06:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.614 06:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:19.614 06:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.614 06:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.614 06:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.614 06:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:19.614 "name": "Existed_Raid", 00:18:19.614 "uuid": "31a2560d-f7dc-493f-8c57-5e0fb51753b7", 00:18:19.614 "strip_size_kb": 64, 00:18:19.614 "state": "configuring", 00:18:19.614 "raid_level": "raid5f", 00:18:19.614 "superblock": true, 00:18:19.614 "num_base_bdevs": 3, 00:18:19.614 "num_base_bdevs_discovered": 2, 00:18:19.614 "num_base_bdevs_operational": 3, 00:18:19.614 "base_bdevs_list": [ 00:18:19.614 { 00:18:19.614 "name": "BaseBdev1", 00:18:19.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.614 "is_configured": false, 00:18:19.614 "data_offset": 0, 00:18:19.614 "data_size": 0 00:18:19.614 }, 00:18:19.614 { 00:18:19.614 "name": "BaseBdev2", 00:18:19.614 "uuid": "e0f6ab0a-9857-47d0-9476-8bf1deea815b", 00:18:19.614 "is_configured": true, 00:18:19.614 "data_offset": 2048, 00:18:19.614 "data_size": 63488 00:18:19.614 }, 00:18:19.614 { 00:18:19.614 "name": "BaseBdev3", 00:18:19.614 "uuid": 
"be2bd2df-864c-4abc-897c-dd364fe1d964", 00:18:19.614 "is_configured": true, 00:18:19.614 "data_offset": 2048, 00:18:19.614 "data_size": 63488 00:18:19.614 } 00:18:19.614 ] 00:18:19.614 }' 00:18:19.614 06:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:19.614 06:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.873 06:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:18:19.873 06:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.873 06:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.873 [2024-11-26 06:28:03.942967] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:19.873 06:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.874 06:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:19.874 06:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:19.874 06:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:19.874 06:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:19.874 06:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:19.874 06:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:19.874 06:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:19.874 06:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:19.874 06:28:03 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:19.874 06:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:19.874 06:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.874 06:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:19.874 06:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.874 06:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.874 06:28:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.874 06:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:19.874 "name": "Existed_Raid", 00:18:19.874 "uuid": "31a2560d-f7dc-493f-8c57-5e0fb51753b7", 00:18:19.874 "strip_size_kb": 64, 00:18:19.874 "state": "configuring", 00:18:19.874 "raid_level": "raid5f", 00:18:19.874 "superblock": true, 00:18:19.874 "num_base_bdevs": 3, 00:18:19.874 "num_base_bdevs_discovered": 1, 00:18:19.874 "num_base_bdevs_operational": 3, 00:18:19.874 "base_bdevs_list": [ 00:18:19.874 { 00:18:19.874 "name": "BaseBdev1", 00:18:19.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.874 "is_configured": false, 00:18:19.874 "data_offset": 0, 00:18:19.874 "data_size": 0 00:18:19.874 }, 00:18:19.874 { 00:18:19.874 "name": null, 00:18:19.874 "uuid": "e0f6ab0a-9857-47d0-9476-8bf1deea815b", 00:18:19.874 "is_configured": false, 00:18:19.874 "data_offset": 0, 00:18:19.874 "data_size": 63488 00:18:19.874 }, 00:18:19.874 { 00:18:19.874 "name": "BaseBdev3", 00:18:19.874 "uuid": "be2bd2df-864c-4abc-897c-dd364fe1d964", 00:18:19.874 "is_configured": true, 00:18:19.874 "data_offset": 2048, 00:18:19.874 "data_size": 63488 00:18:19.874 } 00:18:19.874 ] 
00:18:19.874 }' 00:18:19.874 06:28:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:19.874 06:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.441 06:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.441 06:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:20.441 06:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.441 06:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.441 06:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.441 06:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:18:20.441 06:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:20.441 06:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.441 06:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.441 [2024-11-26 06:28:04.469792] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:20.441 BaseBdev1 00:18:20.441 06:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.441 06:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:18:20.441 06:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:20.441 06:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:20.441 06:28:04 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:18:20.441 06:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:20.441 06:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:20.441 06:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:20.441 06:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.441 06:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.441 06:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.441 06:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:20.441 06:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.441 06:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.441 [ 00:18:20.441 { 00:18:20.441 "name": "BaseBdev1", 00:18:20.441 "aliases": [ 00:18:20.441 "41660f7d-a973-4378-9543-e74bea2842f2" 00:18:20.441 ], 00:18:20.441 "product_name": "Malloc disk", 00:18:20.441 "block_size": 512, 00:18:20.441 "num_blocks": 65536, 00:18:20.441 "uuid": "41660f7d-a973-4378-9543-e74bea2842f2", 00:18:20.441 "assigned_rate_limits": { 00:18:20.441 "rw_ios_per_sec": 0, 00:18:20.441 "rw_mbytes_per_sec": 0, 00:18:20.441 "r_mbytes_per_sec": 0, 00:18:20.441 "w_mbytes_per_sec": 0 00:18:20.441 }, 00:18:20.441 "claimed": true, 00:18:20.441 "claim_type": "exclusive_write", 00:18:20.441 "zoned": false, 00:18:20.441 "supported_io_types": { 00:18:20.441 "read": true, 00:18:20.441 "write": true, 00:18:20.441 "unmap": true, 00:18:20.441 "flush": true, 00:18:20.441 "reset": true, 00:18:20.441 "nvme_admin": false, 00:18:20.441 "nvme_io": false, 00:18:20.441 
"nvme_io_md": false, 00:18:20.441 "write_zeroes": true, 00:18:20.441 "zcopy": true, 00:18:20.441 "get_zone_info": false, 00:18:20.441 "zone_management": false, 00:18:20.441 "zone_append": false, 00:18:20.441 "compare": false, 00:18:20.441 "compare_and_write": false, 00:18:20.441 "abort": true, 00:18:20.441 "seek_hole": false, 00:18:20.441 "seek_data": false, 00:18:20.441 "copy": true, 00:18:20.441 "nvme_iov_md": false 00:18:20.441 }, 00:18:20.441 "memory_domains": [ 00:18:20.441 { 00:18:20.441 "dma_device_id": "system", 00:18:20.441 "dma_device_type": 1 00:18:20.441 }, 00:18:20.441 { 00:18:20.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:20.441 "dma_device_type": 2 00:18:20.441 } 00:18:20.441 ], 00:18:20.441 "driver_specific": {} 00:18:20.441 } 00:18:20.441 ] 00:18:20.441 06:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.441 06:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:20.441 06:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:20.441 06:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:20.441 06:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:20.441 06:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:20.441 06:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:20.441 06:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:20.441 06:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:20.441 06:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:20.441 
06:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:20.441 06:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:20.441 06:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.441 06:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.441 06:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:20.441 06:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.441 06:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.441 06:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:20.441 "name": "Existed_Raid", 00:18:20.441 "uuid": "31a2560d-f7dc-493f-8c57-5e0fb51753b7", 00:18:20.441 "strip_size_kb": 64, 00:18:20.441 "state": "configuring", 00:18:20.441 "raid_level": "raid5f", 00:18:20.441 "superblock": true, 00:18:20.441 "num_base_bdevs": 3, 00:18:20.441 "num_base_bdevs_discovered": 2, 00:18:20.441 "num_base_bdevs_operational": 3, 00:18:20.441 "base_bdevs_list": [ 00:18:20.441 { 00:18:20.441 "name": "BaseBdev1", 00:18:20.441 "uuid": "41660f7d-a973-4378-9543-e74bea2842f2", 00:18:20.441 "is_configured": true, 00:18:20.441 "data_offset": 2048, 00:18:20.441 "data_size": 63488 00:18:20.441 }, 00:18:20.441 { 00:18:20.441 "name": null, 00:18:20.441 "uuid": "e0f6ab0a-9857-47d0-9476-8bf1deea815b", 00:18:20.441 "is_configured": false, 00:18:20.441 "data_offset": 0, 00:18:20.441 "data_size": 63488 00:18:20.441 }, 00:18:20.441 { 00:18:20.441 "name": "BaseBdev3", 00:18:20.441 "uuid": "be2bd2df-864c-4abc-897c-dd364fe1d964", 00:18:20.441 "is_configured": true, 00:18:20.441 "data_offset": 2048, 00:18:20.441 "data_size": 63488 00:18:20.441 } 
00:18:20.441 ] 00:18:20.441 }' 00:18:20.441 06:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:20.441 06:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.010 06:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:21.010 06:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.010 06:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.010 06:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.010 06:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.010 06:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:18:21.010 06:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:18:21.010 06:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.010 06:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.010 [2024-11-26 06:28:04.973021] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:21.010 06:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.010 06:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:21.010 06:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:21.010 06:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:21.010 06:28:04 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:21.010 06:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:21.010 06:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:21.010 06:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:21.010 06:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:21.010 06:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:21.010 06:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:21.010 06:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.010 06:28:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:21.010 06:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.010 06:28:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.010 06:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.010 06:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:21.010 "name": "Existed_Raid", 00:18:21.010 "uuid": "31a2560d-f7dc-493f-8c57-5e0fb51753b7", 00:18:21.010 "strip_size_kb": 64, 00:18:21.010 "state": "configuring", 00:18:21.010 "raid_level": "raid5f", 00:18:21.010 "superblock": true, 00:18:21.010 "num_base_bdevs": 3, 00:18:21.010 "num_base_bdevs_discovered": 1, 00:18:21.010 "num_base_bdevs_operational": 3, 00:18:21.010 "base_bdevs_list": [ 00:18:21.010 { 00:18:21.010 "name": "BaseBdev1", 00:18:21.010 "uuid": "41660f7d-a973-4378-9543-e74bea2842f2", 00:18:21.010 "is_configured": true, 
00:18:21.010 "data_offset": 2048, 00:18:21.010 "data_size": 63488 00:18:21.010 }, 00:18:21.010 { 00:18:21.010 "name": null, 00:18:21.010 "uuid": "e0f6ab0a-9857-47d0-9476-8bf1deea815b", 00:18:21.010 "is_configured": false, 00:18:21.010 "data_offset": 0, 00:18:21.010 "data_size": 63488 00:18:21.010 }, 00:18:21.010 { 00:18:21.010 "name": null, 00:18:21.010 "uuid": "be2bd2df-864c-4abc-897c-dd364fe1d964", 00:18:21.010 "is_configured": false, 00:18:21.010 "data_offset": 0, 00:18:21.010 "data_size": 63488 00:18:21.010 } 00:18:21.010 ] 00:18:21.010 }' 00:18:21.010 06:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:21.010 06:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.270 06:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:21.270 06:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.270 06:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.270 06:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.270 06:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.560 06:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:18:21.560 06:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:21.560 06:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.560 06:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.560 [2024-11-26 06:28:05.412346] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:21.560 06:28:05 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.560 06:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:21.560 06:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:21.560 06:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:21.560 06:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:21.560 06:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:21.560 06:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:21.560 06:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:21.560 06:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:21.560 06:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:21.560 06:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:21.560 06:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:21.560 06:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.560 06:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.560 06:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.560 06:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.560 06:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:18:21.560 "name": "Existed_Raid", 00:18:21.560 "uuid": "31a2560d-f7dc-493f-8c57-5e0fb51753b7", 00:18:21.560 "strip_size_kb": 64, 00:18:21.560 "state": "configuring", 00:18:21.560 "raid_level": "raid5f", 00:18:21.560 "superblock": true, 00:18:21.560 "num_base_bdevs": 3, 00:18:21.560 "num_base_bdevs_discovered": 2, 00:18:21.560 "num_base_bdevs_operational": 3, 00:18:21.560 "base_bdevs_list": [ 00:18:21.560 { 00:18:21.560 "name": "BaseBdev1", 00:18:21.560 "uuid": "41660f7d-a973-4378-9543-e74bea2842f2", 00:18:21.560 "is_configured": true, 00:18:21.560 "data_offset": 2048, 00:18:21.560 "data_size": 63488 00:18:21.560 }, 00:18:21.560 { 00:18:21.560 "name": null, 00:18:21.560 "uuid": "e0f6ab0a-9857-47d0-9476-8bf1deea815b", 00:18:21.560 "is_configured": false, 00:18:21.560 "data_offset": 0, 00:18:21.560 "data_size": 63488 00:18:21.560 }, 00:18:21.560 { 00:18:21.560 "name": "BaseBdev3", 00:18:21.560 "uuid": "be2bd2df-864c-4abc-897c-dd364fe1d964", 00:18:21.560 "is_configured": true, 00:18:21.560 "data_offset": 2048, 00:18:21.560 "data_size": 63488 00:18:21.560 } 00:18:21.560 ] 00:18:21.560 }' 00:18:21.560 06:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:21.560 06:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.819 06:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:21.819 06:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.819 06:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.819 06:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.819 06:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.819 06:28:05 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:18:21.819 06:28:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:21.819 06:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.819 06:28:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.819 [2024-11-26 06:28:05.939490] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:22.078 06:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.078 06:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:22.078 06:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:22.078 06:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:22.078 06:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:22.078 06:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:22.078 06:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:22.078 06:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:22.078 06:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:22.078 06:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:22.078 06:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:22.078 06:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.078 06:28:06 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:22.078 06:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.078 06:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.078 06:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.078 06:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:22.078 "name": "Existed_Raid", 00:18:22.078 "uuid": "31a2560d-f7dc-493f-8c57-5e0fb51753b7", 00:18:22.078 "strip_size_kb": 64, 00:18:22.078 "state": "configuring", 00:18:22.078 "raid_level": "raid5f", 00:18:22.078 "superblock": true, 00:18:22.078 "num_base_bdevs": 3, 00:18:22.078 "num_base_bdevs_discovered": 1, 00:18:22.078 "num_base_bdevs_operational": 3, 00:18:22.078 "base_bdevs_list": [ 00:18:22.078 { 00:18:22.078 "name": null, 00:18:22.078 "uuid": "41660f7d-a973-4378-9543-e74bea2842f2", 00:18:22.078 "is_configured": false, 00:18:22.078 "data_offset": 0, 00:18:22.078 "data_size": 63488 00:18:22.078 }, 00:18:22.078 { 00:18:22.078 "name": null, 00:18:22.078 "uuid": "e0f6ab0a-9857-47d0-9476-8bf1deea815b", 00:18:22.078 "is_configured": false, 00:18:22.078 "data_offset": 0, 00:18:22.078 "data_size": 63488 00:18:22.078 }, 00:18:22.078 { 00:18:22.078 "name": "BaseBdev3", 00:18:22.078 "uuid": "be2bd2df-864c-4abc-897c-dd364fe1d964", 00:18:22.078 "is_configured": true, 00:18:22.078 "data_offset": 2048, 00:18:22.078 "data_size": 63488 00:18:22.078 } 00:18:22.078 ] 00:18:22.078 }' 00:18:22.078 06:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:22.078 06:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.646 06:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq 
'.[0].base_bdevs_list[0].is_configured' 00:18:22.646 06:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.646 06:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.646 06:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.646 06:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.646 06:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:18:22.646 06:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:22.646 06:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.646 06:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.646 [2024-11-26 06:28:06.556783] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:22.646 06:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.646 06:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:22.646 06:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:22.646 06:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:22.646 06:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:22.646 06:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:22.646 06:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:22.646 
06:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:22.646 06:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:22.646 06:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:22.646 06:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:22.646 06:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.646 06:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.646 06:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.646 06:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:22.646 06:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.646 06:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:22.646 "name": "Existed_Raid", 00:18:22.646 "uuid": "31a2560d-f7dc-493f-8c57-5e0fb51753b7", 00:18:22.646 "strip_size_kb": 64, 00:18:22.646 "state": "configuring", 00:18:22.646 "raid_level": "raid5f", 00:18:22.646 "superblock": true, 00:18:22.646 "num_base_bdevs": 3, 00:18:22.646 "num_base_bdevs_discovered": 2, 00:18:22.646 "num_base_bdevs_operational": 3, 00:18:22.646 "base_bdevs_list": [ 00:18:22.646 { 00:18:22.646 "name": null, 00:18:22.646 "uuid": "41660f7d-a973-4378-9543-e74bea2842f2", 00:18:22.646 "is_configured": false, 00:18:22.646 "data_offset": 0, 00:18:22.646 "data_size": 63488 00:18:22.646 }, 00:18:22.646 { 00:18:22.646 "name": "BaseBdev2", 00:18:22.646 "uuid": "e0f6ab0a-9857-47d0-9476-8bf1deea815b", 00:18:22.646 "is_configured": true, 00:18:22.646 "data_offset": 2048, 00:18:22.646 "data_size": 63488 00:18:22.646 }, 
00:18:22.646 { 00:18:22.646 "name": "BaseBdev3", 00:18:22.646 "uuid": "be2bd2df-864c-4abc-897c-dd364fe1d964", 00:18:22.646 "is_configured": true, 00:18:22.646 "data_offset": 2048, 00:18:22.646 "data_size": 63488 00:18:22.646 } 00:18:22.646 ] 00:18:22.646 }' 00:18:22.646 06:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:22.646 06:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.906 06:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.906 06:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.906 06:28:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.906 06:28:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:22.907 06:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.907 06:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:18:22.907 06:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.907 06:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.907 06:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:22.907 06:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.167 06:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.167 06:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 41660f7d-a973-4378-9543-e74bea2842f2 00:18:23.167 06:28:07 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.167 06:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.167 [2024-11-26 06:28:07.129807] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:23.167 [2024-11-26 06:28:07.130215] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:23.167 [2024-11-26 06:28:07.130280] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:18:23.167 [2024-11-26 06:28:07.130637] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:18:23.167 NewBaseBdev 00:18:23.167 06:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.167 06:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:18:23.167 06:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:18:23.167 06:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:23.167 06:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:23.167 06:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:23.167 06:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:23.167 06:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:23.167 06:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.167 06:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.167 [2024-11-26 06:28:07.136667] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev 
generic 0x617000008200 00:18:23.167 [2024-11-26 06:28:07.136689] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:18:23.167 [2024-11-26 06:28:07.136908] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:23.167 06:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.167 06:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:23.167 06:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.167 06:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.167 [ 00:18:23.167 { 00:18:23.167 "name": "NewBaseBdev", 00:18:23.167 "aliases": [ 00:18:23.167 "41660f7d-a973-4378-9543-e74bea2842f2" 00:18:23.167 ], 00:18:23.167 "product_name": "Malloc disk", 00:18:23.167 "block_size": 512, 00:18:23.167 "num_blocks": 65536, 00:18:23.167 "uuid": "41660f7d-a973-4378-9543-e74bea2842f2", 00:18:23.167 "assigned_rate_limits": { 00:18:23.167 "rw_ios_per_sec": 0, 00:18:23.167 "rw_mbytes_per_sec": 0, 00:18:23.167 "r_mbytes_per_sec": 0, 00:18:23.167 "w_mbytes_per_sec": 0 00:18:23.167 }, 00:18:23.167 "claimed": true, 00:18:23.167 "claim_type": "exclusive_write", 00:18:23.167 "zoned": false, 00:18:23.167 "supported_io_types": { 00:18:23.167 "read": true, 00:18:23.167 "write": true, 00:18:23.167 "unmap": true, 00:18:23.167 "flush": true, 00:18:23.167 "reset": true, 00:18:23.167 "nvme_admin": false, 00:18:23.167 "nvme_io": false, 00:18:23.167 "nvme_io_md": false, 00:18:23.167 "write_zeroes": true, 00:18:23.167 "zcopy": true, 00:18:23.167 "get_zone_info": false, 00:18:23.167 "zone_management": false, 00:18:23.167 "zone_append": false, 00:18:23.167 "compare": false, 00:18:23.167 "compare_and_write": false, 00:18:23.167 "abort": true, 00:18:23.167 "seek_hole": false, 
00:18:23.167 "seek_data": false, 00:18:23.167 "copy": true, 00:18:23.167 "nvme_iov_md": false 00:18:23.167 }, 00:18:23.167 "memory_domains": [ 00:18:23.167 { 00:18:23.167 "dma_device_id": "system", 00:18:23.167 "dma_device_type": 1 00:18:23.167 }, 00:18:23.167 { 00:18:23.167 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:23.167 "dma_device_type": 2 00:18:23.167 } 00:18:23.167 ], 00:18:23.167 "driver_specific": {} 00:18:23.167 } 00:18:23.167 ] 00:18:23.167 06:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.167 06:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:23.167 06:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:18:23.167 06:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:23.167 06:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:23.167 06:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:23.167 06:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:23.167 06:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:23.167 06:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:23.167 06:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:23.167 06:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:23.167 06:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:23.167 06:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:18:23.167 06:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:23.167 06:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.167 06:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.167 06:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.167 06:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:23.167 "name": "Existed_Raid", 00:18:23.167 "uuid": "31a2560d-f7dc-493f-8c57-5e0fb51753b7", 00:18:23.167 "strip_size_kb": 64, 00:18:23.167 "state": "online", 00:18:23.167 "raid_level": "raid5f", 00:18:23.167 "superblock": true, 00:18:23.167 "num_base_bdevs": 3, 00:18:23.167 "num_base_bdevs_discovered": 3, 00:18:23.167 "num_base_bdevs_operational": 3, 00:18:23.167 "base_bdevs_list": [ 00:18:23.167 { 00:18:23.167 "name": "NewBaseBdev", 00:18:23.167 "uuid": "41660f7d-a973-4378-9543-e74bea2842f2", 00:18:23.167 "is_configured": true, 00:18:23.167 "data_offset": 2048, 00:18:23.167 "data_size": 63488 00:18:23.167 }, 00:18:23.167 { 00:18:23.167 "name": "BaseBdev2", 00:18:23.167 "uuid": "e0f6ab0a-9857-47d0-9476-8bf1deea815b", 00:18:23.167 "is_configured": true, 00:18:23.167 "data_offset": 2048, 00:18:23.168 "data_size": 63488 00:18:23.168 }, 00:18:23.168 { 00:18:23.168 "name": "BaseBdev3", 00:18:23.168 "uuid": "be2bd2df-864c-4abc-897c-dd364fe1d964", 00:18:23.168 "is_configured": true, 00:18:23.168 "data_offset": 2048, 00:18:23.168 "data_size": 63488 00:18:23.168 } 00:18:23.168 ] 00:18:23.168 }' 00:18:23.168 06:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:23.168 06:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.737 06:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # 
verify_raid_bdev_properties Existed_Raid 00:18:23.737 06:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:23.737 06:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:23.737 06:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:23.737 06:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:18:23.737 06:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:23.737 06:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:23.737 06:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:23.737 06:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.737 06:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.737 [2024-11-26 06:28:07.623942] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:23.737 06:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.737 06:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:23.737 "name": "Existed_Raid", 00:18:23.737 "aliases": [ 00:18:23.737 "31a2560d-f7dc-493f-8c57-5e0fb51753b7" 00:18:23.737 ], 00:18:23.737 "product_name": "Raid Volume", 00:18:23.737 "block_size": 512, 00:18:23.737 "num_blocks": 126976, 00:18:23.737 "uuid": "31a2560d-f7dc-493f-8c57-5e0fb51753b7", 00:18:23.737 "assigned_rate_limits": { 00:18:23.737 "rw_ios_per_sec": 0, 00:18:23.737 "rw_mbytes_per_sec": 0, 00:18:23.737 "r_mbytes_per_sec": 0, 00:18:23.737 "w_mbytes_per_sec": 0 00:18:23.737 }, 00:18:23.737 "claimed": false, 00:18:23.737 "zoned": false, 00:18:23.737 
"supported_io_types": { 00:18:23.737 "read": true, 00:18:23.737 "write": true, 00:18:23.737 "unmap": false, 00:18:23.737 "flush": false, 00:18:23.737 "reset": true, 00:18:23.737 "nvme_admin": false, 00:18:23.737 "nvme_io": false, 00:18:23.737 "nvme_io_md": false, 00:18:23.737 "write_zeroes": true, 00:18:23.737 "zcopy": false, 00:18:23.737 "get_zone_info": false, 00:18:23.737 "zone_management": false, 00:18:23.737 "zone_append": false, 00:18:23.737 "compare": false, 00:18:23.737 "compare_and_write": false, 00:18:23.737 "abort": false, 00:18:23.737 "seek_hole": false, 00:18:23.737 "seek_data": false, 00:18:23.737 "copy": false, 00:18:23.737 "nvme_iov_md": false 00:18:23.737 }, 00:18:23.737 "driver_specific": { 00:18:23.737 "raid": { 00:18:23.737 "uuid": "31a2560d-f7dc-493f-8c57-5e0fb51753b7", 00:18:23.737 "strip_size_kb": 64, 00:18:23.737 "state": "online", 00:18:23.737 "raid_level": "raid5f", 00:18:23.737 "superblock": true, 00:18:23.737 "num_base_bdevs": 3, 00:18:23.737 "num_base_bdevs_discovered": 3, 00:18:23.737 "num_base_bdevs_operational": 3, 00:18:23.737 "base_bdevs_list": [ 00:18:23.737 { 00:18:23.737 "name": "NewBaseBdev", 00:18:23.737 "uuid": "41660f7d-a973-4378-9543-e74bea2842f2", 00:18:23.737 "is_configured": true, 00:18:23.737 "data_offset": 2048, 00:18:23.737 "data_size": 63488 00:18:23.737 }, 00:18:23.737 { 00:18:23.737 "name": "BaseBdev2", 00:18:23.737 "uuid": "e0f6ab0a-9857-47d0-9476-8bf1deea815b", 00:18:23.737 "is_configured": true, 00:18:23.737 "data_offset": 2048, 00:18:23.737 "data_size": 63488 00:18:23.737 }, 00:18:23.737 { 00:18:23.737 "name": "BaseBdev3", 00:18:23.737 "uuid": "be2bd2df-864c-4abc-897c-dd364fe1d964", 00:18:23.737 "is_configured": true, 00:18:23.737 "data_offset": 2048, 00:18:23.737 "data_size": 63488 00:18:23.737 } 00:18:23.737 ] 00:18:23.737 } 00:18:23.737 } 00:18:23.737 }' 00:18:23.737 06:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:18:23.737 06:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:18:23.737 BaseBdev2 00:18:23.737 BaseBdev3' 00:18:23.737 06:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:23.737 06:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:23.737 06:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:23.737 06:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:23.737 06:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:18:23.737 06:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.737 06:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.737 06:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.737 06:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:23.737 06:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:23.737 06:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:23.737 06:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:23.737 06:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.737 06:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.738 06:28:07 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:23.738 06:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.738 06:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:23.738 06:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:23.738 06:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:23.738 06:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:23.738 06:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.738 06:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.738 06:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:23.997 06:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.997 06:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:23.997 06:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:23.997 06:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:23.997 06:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.997 06:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.997 [2024-11-26 06:28:07.915303] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:23.997 [2024-11-26 06:28:07.915402] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev 
state changing from online to offline 00:18:23.997 [2024-11-26 06:28:07.915549] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:23.997 [2024-11-26 06:28:07.915975] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:23.997 [2024-11-26 06:28:07.916048] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:18:23.997 06:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.997 06:28:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 81069 00:18:23.997 06:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 81069 ']' 00:18:23.997 06:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 81069 00:18:23.997 06:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:18:23.998 06:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:23.998 06:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81069 00:18:23.998 killing process with pid 81069 00:18:23.998 06:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:23.998 06:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:23.998 06:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81069' 00:18:23.998 06:28:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 81069 00:18:23.998 [2024-11-26 06:28:07.966812] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:23.998 06:28:07 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@978 -- # wait 81069 00:18:24.257 [2024-11-26 06:28:08.312744] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:25.728 06:28:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:18:25.728 00:18:25.728 real 0m11.034s 00:18:25.728 user 0m17.105s 00:18:25.728 sys 0m2.210s 00:18:25.728 06:28:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:25.728 06:28:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.728 ************************************ 00:18:25.728 END TEST raid5f_state_function_test_sb 00:18:25.728 ************************************ 00:18:25.728 06:28:09 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:18:25.728 06:28:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:25.728 06:28:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:25.728 06:28:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:25.728 ************************************ 00:18:25.728 START TEST raid5f_superblock_test 00:18:25.728 ************************************ 00:18:25.728 06:28:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:18:25.728 06:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:18:25.728 06:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:18:25.728 06:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:25.728 06:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:25.728 06:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:25.728 06:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 
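The `verify_raid_bdev_properties` trace above pulls the configured base bdev names out of the `bdev_get_bdevs` JSON with `jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'`, then compares the `[.block_size, .md_size, .md_interleave, .dif_type]` tuple of each base bdev against the raid volume. A minimal Python sketch of that same filter logic follows; the JSON structure mirrors the `Existed_Raid` dump in the log, but the values are abbreviated and `BaseBdev3` is marked unconfigured purely for illustration (in the actual run all three base bdevs are configured):

```python
import json

# Abbreviated raid bdev info, shaped like the `bdev_get_bdevs -b Existed_Raid`
# dump in the log above (illustrative values, not the full output).
raid_info = json.loads("""
{
    "name": "Existed_Raid",
    "block_size": 512,
    "driver_specific": {
        "raid": {
            "base_bdevs_list": [
                {"name": "NewBaseBdev", "is_configured": true},
                {"name": "BaseBdev2",   "is_configured": true},
                {"name": "BaseBdev3",   "is_configured": false}
            ]
        }
    }
}
""")

# Equivalent of:
#   jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
configured = [
    b["name"]
    for b in raid_info["driver_specific"]["raid"]["base_bdevs_list"]
    if b["is_configured"]
]
print(configured)  # ['NewBaseBdev', 'BaseBdev2']
```

The test script then loops over exactly this name list, fetching each base bdev and asserting its `512 …` property tuple matches the raid volume's, which is why the log shows one `rpc_cmd bdev_get_bdevs -b <name>` / `[[ 512 == 512 ]]` pair per base bdev.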
00:18:25.728 06:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:25.728 06:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:25.728 06:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:25.728 06:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:25.728 06:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:25.728 06:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:25.728 06:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:25.728 06:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:18:25.728 06:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:18:25.728 06:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:18:25.728 06:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81697 00:18:25.728 06:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:25.728 06:28:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81697 00:18:25.728 06:28:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 81697 ']' 00:18:25.728 06:28:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:25.728 06:28:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:25.728 06:28:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:25.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:25.728 06:28:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:25.728 06:28:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:25.728 [2024-11-26 06:28:09.746994] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:18:25.728 [2024-11-26 06:28:09.747161] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81697 ] 00:18:25.987 [2024-11-26 06:28:09.927966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.987 [2024-11-26 06:28:10.070771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:26.247 [2024-11-26 06:28:10.320306] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:26.247 [2024-11-26 06:28:10.320354] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:26.507 06:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:26.507 06:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:18:26.507 06:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:26.507 06:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:26.507 06:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:26.507 06:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:26.507 06:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:26.507 06:28:10 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:26.507 06:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:26.507 06:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:26.507 06:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:18:26.507 06:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.507 06:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.767 malloc1 00:18:26.767 06:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.767 06:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:26.767 06:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.767 06:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.767 [2024-11-26 06:28:10.656273] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:26.767 [2024-11-26 06:28:10.656402] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:26.767 [2024-11-26 06:28:10.656449] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:26.767 [2024-11-26 06:28:10.656485] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:26.767 [2024-11-26 06:28:10.659094] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:26.767 [2024-11-26 06:28:10.659162] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:26.767 pt1 00:18:26.767 06:28:10 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.767 06:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:26.767 06:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:26.767 06:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:26.767 06:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:18:26.767 06:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:26.767 06:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:26.767 06:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:26.767 06:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:26.767 06:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:18:26.767 06:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.767 06:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.767 malloc2 00:18:26.767 06:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.767 06:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:26.768 06:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.768 06:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.768 [2024-11-26 06:28:10.721129] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:26.768 [2024-11-26 06:28:10.721226] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:26.768 [2024-11-26 06:28:10.721286] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:26.768 [2024-11-26 06:28:10.721316] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:26.768 [2024-11-26 06:28:10.723713] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:26.768 [2024-11-26 06:28:10.723779] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:26.768 pt2 00:18:26.768 06:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.768 06:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:26.768 06:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:26.768 06:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:18:26.768 06:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:18:26.768 06:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:26.768 06:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:26.768 06:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:26.768 06:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:26.768 06:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:18:26.768 06:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.768 06:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.768 malloc3 00:18:26.768 06:28:10 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.768 06:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:26.768 06:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.768 06:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.768 [2024-11-26 06:28:10.798230] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:26.768 [2024-11-26 06:28:10.798323] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:26.768 [2024-11-26 06:28:10.798364] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:26.768 [2024-11-26 06:28:10.798393] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:26.768 [2024-11-26 06:28:10.800888] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:26.768 [2024-11-26 06:28:10.800962] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:26.768 pt3 00:18:26.768 06:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.768 06:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:26.768 06:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:26.768 06:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:18:26.768 06:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.768 06:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.768 [2024-11-26 06:28:10.810291] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:26.768 [2024-11-26 
06:28:10.812458] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:26.768 [2024-11-26 06:28:10.812524] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:26.768 [2024-11-26 06:28:10.812702] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:26.768 [2024-11-26 06:28:10.812721] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:18:26.768 [2024-11-26 06:28:10.812970] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:26.768 [2024-11-26 06:28:10.818996] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:26.768 [2024-11-26 06:28:10.819048] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:26.768 [2024-11-26 06:28:10.819297] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:26.768 06:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.768 06:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:26.768 06:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:26.768 06:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:26.768 06:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:26.768 06:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:26.768 06:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:26.768 06:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:26.768 06:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:18:26.768 06:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:26.768 06:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:26.768 06:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.768 06:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.768 06:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.768 06:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.768 06:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.768 06:28:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:26.768 "name": "raid_bdev1", 00:18:26.768 "uuid": "2edc14e1-2955-48ba-9370-5714022a691b", 00:18:26.768 "strip_size_kb": 64, 00:18:26.768 "state": "online", 00:18:26.768 "raid_level": "raid5f", 00:18:26.768 "superblock": true, 00:18:26.768 "num_base_bdevs": 3, 00:18:26.768 "num_base_bdevs_discovered": 3, 00:18:26.768 "num_base_bdevs_operational": 3, 00:18:26.768 "base_bdevs_list": [ 00:18:26.768 { 00:18:26.768 "name": "pt1", 00:18:26.768 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:26.768 "is_configured": true, 00:18:26.768 "data_offset": 2048, 00:18:26.768 "data_size": 63488 00:18:26.768 }, 00:18:26.768 { 00:18:26.768 "name": "pt2", 00:18:26.768 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:26.768 "is_configured": true, 00:18:26.768 "data_offset": 2048, 00:18:26.768 "data_size": 63488 00:18:26.768 }, 00:18:26.768 { 00:18:26.768 "name": "pt3", 00:18:26.768 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:26.768 "is_configured": true, 00:18:26.768 "data_offset": 2048, 00:18:26.768 "data_size": 63488 00:18:26.768 } 00:18:26.768 ] 00:18:26.768 }' 00:18:26.768 06:28:10 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:26.768 06:28:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.337 06:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:27.337 06:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:27.337 06:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:27.337 06:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:27.338 06:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:27.338 06:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:27.338 06:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:27.338 06:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.338 06:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:27.338 06:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.338 [2024-11-26 06:28:11.286191] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:27.338 06:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.338 06:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:27.338 "name": "raid_bdev1", 00:18:27.338 "aliases": [ 00:18:27.338 "2edc14e1-2955-48ba-9370-5714022a691b" 00:18:27.338 ], 00:18:27.338 "product_name": "Raid Volume", 00:18:27.338 "block_size": 512, 00:18:27.338 "num_blocks": 126976, 00:18:27.338 "uuid": "2edc14e1-2955-48ba-9370-5714022a691b", 00:18:27.338 "assigned_rate_limits": { 00:18:27.338 "rw_ios_per_sec": 0, 00:18:27.338 
"rw_mbytes_per_sec": 0, 00:18:27.338 "r_mbytes_per_sec": 0, 00:18:27.338 "w_mbytes_per_sec": 0 00:18:27.338 }, 00:18:27.338 "claimed": false, 00:18:27.338 "zoned": false, 00:18:27.338 "supported_io_types": { 00:18:27.338 "read": true, 00:18:27.338 "write": true, 00:18:27.338 "unmap": false, 00:18:27.338 "flush": false, 00:18:27.338 "reset": true, 00:18:27.338 "nvme_admin": false, 00:18:27.338 "nvme_io": false, 00:18:27.338 "nvme_io_md": false, 00:18:27.338 "write_zeroes": true, 00:18:27.338 "zcopy": false, 00:18:27.338 "get_zone_info": false, 00:18:27.338 "zone_management": false, 00:18:27.338 "zone_append": false, 00:18:27.338 "compare": false, 00:18:27.338 "compare_and_write": false, 00:18:27.338 "abort": false, 00:18:27.338 "seek_hole": false, 00:18:27.338 "seek_data": false, 00:18:27.338 "copy": false, 00:18:27.338 "nvme_iov_md": false 00:18:27.338 }, 00:18:27.338 "driver_specific": { 00:18:27.338 "raid": { 00:18:27.338 "uuid": "2edc14e1-2955-48ba-9370-5714022a691b", 00:18:27.338 "strip_size_kb": 64, 00:18:27.338 "state": "online", 00:18:27.338 "raid_level": "raid5f", 00:18:27.338 "superblock": true, 00:18:27.338 "num_base_bdevs": 3, 00:18:27.338 "num_base_bdevs_discovered": 3, 00:18:27.338 "num_base_bdevs_operational": 3, 00:18:27.338 "base_bdevs_list": [ 00:18:27.338 { 00:18:27.338 "name": "pt1", 00:18:27.338 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:27.338 "is_configured": true, 00:18:27.338 "data_offset": 2048, 00:18:27.338 "data_size": 63488 00:18:27.338 }, 00:18:27.338 { 00:18:27.338 "name": "pt2", 00:18:27.338 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:27.338 "is_configured": true, 00:18:27.338 "data_offset": 2048, 00:18:27.338 "data_size": 63488 00:18:27.338 }, 00:18:27.338 { 00:18:27.338 "name": "pt3", 00:18:27.338 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:27.338 "is_configured": true, 00:18:27.338 "data_offset": 2048, 00:18:27.338 "data_size": 63488 00:18:27.338 } 00:18:27.338 ] 00:18:27.338 } 00:18:27.338 } 
00:18:27.338 }' 00:18:27.338 06:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:27.338 06:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:27.338 pt2 00:18:27.338 pt3' 00:18:27.338 06:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:27.338 06:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:27.338 06:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:27.338 06:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:27.338 06:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.338 06:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.338 06:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:27.338 06:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.597 06:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:27.597 06:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:27.597 06:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:27.597 06:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:27.597 06:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.597 06:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.597 06:28:11 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:27.597 06:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.597 06:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:27.597 06:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:27.597 06:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:27.597 06:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:27.597 06:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:18:27.597 06:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.597 06:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.597 06:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.597 06:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:27.597 06:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:27.597 06:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:27.597 06:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:27.597 06:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.597 06:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.597 [2024-11-26 06:28:11.585582] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:27.597 06:28:11 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.597 06:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=2edc14e1-2955-48ba-9370-5714022a691b 00:18:27.597 06:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 2edc14e1-2955-48ba-9370-5714022a691b ']' 00:18:27.597 06:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:27.598 06:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.598 06:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.598 [2024-11-26 06:28:11.633300] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:27.598 [2024-11-26 06:28:11.633382] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:27.598 [2024-11-26 06:28:11.633535] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:27.598 [2024-11-26 06:28:11.633669] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:27.598 [2024-11-26 06:28:11.633722] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:27.598 06:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.598 06:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.598 06:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:27.598 06:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.598 06:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.598 06:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.598 06:28:11 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:27.598 06:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:27.598 06:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:27.598 06:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:27.598 06:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.598 06:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.598 06:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.598 06:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:27.598 06:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:27.598 06:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.598 06:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.598 06:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.598 06:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:27.598 06:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:18:27.598 06:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.598 06:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.598 06:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.598 06:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:27.598 06:28:11 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:27.598 06:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.598 06:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.855 06:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.855 06:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:27.855 06:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:18:27.856 06:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:18:27.856 06:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:18:27.856 06:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:27.856 06:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:27.856 06:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:27.856 06:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:27.856 06:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:18:27.856 06:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.856 06:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.856 [2024-11-26 06:28:11.781093] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:27.856 [2024-11-26 
06:28:11.783317] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:27.856 [2024-11-26 06:28:11.783433] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:27.856 [2024-11-26 06:28:11.783509] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:27.856 [2024-11-26 06:28:11.783564] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:27.856 [2024-11-26 06:28:11.783584] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:18:27.856 [2024-11-26 06:28:11.783602] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:27.856 [2024-11-26 06:28:11.783612] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:27.856 request: 00:18:27.856 { 00:18:27.856 "name": "raid_bdev1", 00:18:27.856 "raid_level": "raid5f", 00:18:27.856 "base_bdevs": [ 00:18:27.856 "malloc1", 00:18:27.856 "malloc2", 00:18:27.856 "malloc3" 00:18:27.856 ], 00:18:27.856 "strip_size_kb": 64, 00:18:27.856 "superblock": false, 00:18:27.856 "method": "bdev_raid_create", 00:18:27.856 "req_id": 1 00:18:27.856 } 00:18:27.856 Got JSON-RPC error response 00:18:27.856 response: 00:18:27.856 { 00:18:27.856 "code": -17, 00:18:27.856 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:27.856 } 00:18:27.856 06:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:27.856 06:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:18:27.856 06:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:27.856 06:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 
00:18:27.856 06:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:27.856 06:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.856 06:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:27.856 06:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.856 06:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.856 06:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.856 06:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:27.856 06:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:27.856 06:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:27.856 06:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.856 06:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.856 [2024-11-26 06:28:11.844896] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:27.856 [2024-11-26 06:28:11.845003] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:27.856 [2024-11-26 06:28:11.845063] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:27.856 [2024-11-26 06:28:11.845107] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:27.856 [2024-11-26 06:28:11.847680] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:27.856 [2024-11-26 06:28:11.847752] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:27.856 [2024-11-26 06:28:11.847875] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:27.856 [2024-11-26 06:28:11.847964] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:27.856 pt1 00:18:27.856 06:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.856 06:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:18:27.856 06:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:27.856 06:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:27.856 06:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:27.856 06:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:27.856 06:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:27.856 06:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:27.856 06:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:27.856 06:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:27.856 06:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:27.856 06:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.856 06:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:27.856 06:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.856 06:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.856 06:28:11 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.856 06:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:27.856 "name": "raid_bdev1", 00:18:27.856 "uuid": "2edc14e1-2955-48ba-9370-5714022a691b", 00:18:27.856 "strip_size_kb": 64, 00:18:27.856 "state": "configuring", 00:18:27.856 "raid_level": "raid5f", 00:18:27.856 "superblock": true, 00:18:27.856 "num_base_bdevs": 3, 00:18:27.856 "num_base_bdevs_discovered": 1, 00:18:27.856 "num_base_bdevs_operational": 3, 00:18:27.856 "base_bdevs_list": [ 00:18:27.856 { 00:18:27.856 "name": "pt1", 00:18:27.856 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:27.856 "is_configured": true, 00:18:27.856 "data_offset": 2048, 00:18:27.856 "data_size": 63488 00:18:27.856 }, 00:18:27.856 { 00:18:27.856 "name": null, 00:18:27.856 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:27.856 "is_configured": false, 00:18:27.856 "data_offset": 2048, 00:18:27.856 "data_size": 63488 00:18:27.856 }, 00:18:27.856 { 00:18:27.856 "name": null, 00:18:27.856 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:27.856 "is_configured": false, 00:18:27.856 "data_offset": 2048, 00:18:27.856 "data_size": 63488 00:18:27.856 } 00:18:27.856 ] 00:18:27.856 }' 00:18:27.856 06:28:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:27.856 06:28:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.423 06:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:18:28.423 06:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:28.423 06:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.423 06:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.423 [2024-11-26 06:28:12.304238] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:28.423 [2024-11-26 06:28:12.304318] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:28.423 [2024-11-26 06:28:12.304347] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:18:28.423 [2024-11-26 06:28:12.304357] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:28.423 [2024-11-26 06:28:12.304907] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:28.423 [2024-11-26 06:28:12.304935] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:28.424 [2024-11-26 06:28:12.305041] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:28.424 [2024-11-26 06:28:12.305082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:28.424 pt2 00:18:28.424 06:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.424 06:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:18:28.424 06:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.424 06:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.424 [2024-11-26 06:28:12.316228] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:28.424 06:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.424 06:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:18:28.424 06:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:28.424 06:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:28.424 06:28:12 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:28.424 06:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:28.424 06:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:28.424 06:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:28.424 06:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:28.424 06:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:28.424 06:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:28.424 06:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.424 06:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.424 06:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.424 06:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.424 06:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.424 06:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:28.424 "name": "raid_bdev1", 00:18:28.424 "uuid": "2edc14e1-2955-48ba-9370-5714022a691b", 00:18:28.424 "strip_size_kb": 64, 00:18:28.424 "state": "configuring", 00:18:28.424 "raid_level": "raid5f", 00:18:28.424 "superblock": true, 00:18:28.424 "num_base_bdevs": 3, 00:18:28.424 "num_base_bdevs_discovered": 1, 00:18:28.424 "num_base_bdevs_operational": 3, 00:18:28.424 "base_bdevs_list": [ 00:18:28.424 { 00:18:28.424 "name": "pt1", 00:18:28.424 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:28.424 "is_configured": true, 00:18:28.424 "data_offset": 2048, 00:18:28.424 "data_size": 63488 00:18:28.424 }, 00:18:28.424 { 
00:18:28.424 "name": null, 00:18:28.424 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:28.424 "is_configured": false, 00:18:28.424 "data_offset": 0, 00:18:28.424 "data_size": 63488 00:18:28.424 }, 00:18:28.424 { 00:18:28.424 "name": null, 00:18:28.424 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:28.424 "is_configured": false, 00:18:28.424 "data_offset": 2048, 00:18:28.424 "data_size": 63488 00:18:28.424 } 00:18:28.424 ] 00:18:28.424 }' 00:18:28.424 06:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:28.424 06:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.683 06:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:28.683 06:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:28.683 06:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:28.683 06:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.683 06:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.683 [2024-11-26 06:28:12.739454] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:28.683 [2024-11-26 06:28:12.739768] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:28.683 [2024-11-26 06:28:12.739902] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:18:28.683 [2024-11-26 06:28:12.740016] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:28.683 [2024-11-26 06:28:12.740721] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:28.683 [2024-11-26 06:28:12.740883] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:28.683 [2024-11-26 
06:28:12.741094] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:28.683 [2024-11-26 06:28:12.741176] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:28.683 pt2 00:18:28.683 06:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.683 06:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:28.683 06:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:28.684 06:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:28.684 06:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.684 06:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.684 [2024-11-26 06:28:12.751414] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:28.684 [2024-11-26 06:28:12.751603] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:28.684 [2024-11-26 06:28:12.751709] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:28.684 [2024-11-26 06:28:12.751833] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:28.684 [2024-11-26 06:28:12.752431] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:28.684 [2024-11-26 06:28:12.752574] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:28.684 [2024-11-26 06:28:12.752745] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:28.684 [2024-11-26 06:28:12.752812] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:28.684 [2024-11-26 06:28:12.753021] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:18:28.684 [2024-11-26 06:28:12.753079] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:18:28.684 [2024-11-26 06:28:12.753442] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:28.684 [2024-11-26 06:28:12.760149] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:28.684 [2024-11-26 06:28:12.760212] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:28.684 [2024-11-26 06:28:12.760541] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:28.684 pt3 00:18:28.684 06:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.684 06:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:28.684 06:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:28.684 06:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:28.684 06:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:28.684 06:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:28.684 06:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:28.684 06:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:28.684 06:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:28.684 06:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:28.684 06:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:28.684 06:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:18:28.684 06:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:28.684 06:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.684 06:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.684 06:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.684 06:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.684 06:28:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.684 06:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:28.684 "name": "raid_bdev1", 00:18:28.684 "uuid": "2edc14e1-2955-48ba-9370-5714022a691b", 00:18:28.684 "strip_size_kb": 64, 00:18:28.684 "state": "online", 00:18:28.684 "raid_level": "raid5f", 00:18:28.684 "superblock": true, 00:18:28.684 "num_base_bdevs": 3, 00:18:28.684 "num_base_bdevs_discovered": 3, 00:18:28.684 "num_base_bdevs_operational": 3, 00:18:28.684 "base_bdevs_list": [ 00:18:28.684 { 00:18:28.684 "name": "pt1", 00:18:28.684 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:28.684 "is_configured": true, 00:18:28.684 "data_offset": 2048, 00:18:28.684 "data_size": 63488 00:18:28.684 }, 00:18:28.684 { 00:18:28.684 "name": "pt2", 00:18:28.684 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:28.684 "is_configured": true, 00:18:28.684 "data_offset": 2048, 00:18:28.684 "data_size": 63488 00:18:28.684 }, 00:18:28.684 { 00:18:28.684 "name": "pt3", 00:18:28.684 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:28.684 "is_configured": true, 00:18:28.684 "data_offset": 2048, 00:18:28.684 "data_size": 63488 00:18:28.684 } 00:18:28.684 ] 00:18:28.684 }' 00:18:28.684 06:28:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:28.684 06:28:12 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.253 06:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:29.253 06:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:29.253 06:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:29.253 06:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:29.253 06:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:29.253 06:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:29.253 06:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:29.253 06:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.254 06:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.254 06:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:29.254 [2024-11-26 06:28:13.204520] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:29.254 06:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.254 06:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:29.254 "name": "raid_bdev1", 00:18:29.254 "aliases": [ 00:18:29.254 "2edc14e1-2955-48ba-9370-5714022a691b" 00:18:29.254 ], 00:18:29.254 "product_name": "Raid Volume", 00:18:29.254 "block_size": 512, 00:18:29.254 "num_blocks": 126976, 00:18:29.254 "uuid": "2edc14e1-2955-48ba-9370-5714022a691b", 00:18:29.254 "assigned_rate_limits": { 00:18:29.254 "rw_ios_per_sec": 0, 00:18:29.254 "rw_mbytes_per_sec": 0, 00:18:29.254 "r_mbytes_per_sec": 0, 00:18:29.254 "w_mbytes_per_sec": 0 00:18:29.254 }, 
00:18:29.254 "claimed": false, 00:18:29.254 "zoned": false, 00:18:29.254 "supported_io_types": { 00:18:29.254 "read": true, 00:18:29.254 "write": true, 00:18:29.254 "unmap": false, 00:18:29.254 "flush": false, 00:18:29.254 "reset": true, 00:18:29.254 "nvme_admin": false, 00:18:29.254 "nvme_io": false, 00:18:29.254 "nvme_io_md": false, 00:18:29.254 "write_zeroes": true, 00:18:29.254 "zcopy": false, 00:18:29.254 "get_zone_info": false, 00:18:29.254 "zone_management": false, 00:18:29.254 "zone_append": false, 00:18:29.254 "compare": false, 00:18:29.254 "compare_and_write": false, 00:18:29.254 "abort": false, 00:18:29.254 "seek_hole": false, 00:18:29.254 "seek_data": false, 00:18:29.254 "copy": false, 00:18:29.254 "nvme_iov_md": false 00:18:29.254 }, 00:18:29.254 "driver_specific": { 00:18:29.254 "raid": { 00:18:29.254 "uuid": "2edc14e1-2955-48ba-9370-5714022a691b", 00:18:29.254 "strip_size_kb": 64, 00:18:29.254 "state": "online", 00:18:29.254 "raid_level": "raid5f", 00:18:29.254 "superblock": true, 00:18:29.254 "num_base_bdevs": 3, 00:18:29.254 "num_base_bdevs_discovered": 3, 00:18:29.254 "num_base_bdevs_operational": 3, 00:18:29.254 "base_bdevs_list": [ 00:18:29.254 { 00:18:29.254 "name": "pt1", 00:18:29.254 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:29.254 "is_configured": true, 00:18:29.254 "data_offset": 2048, 00:18:29.254 "data_size": 63488 00:18:29.254 }, 00:18:29.254 { 00:18:29.254 "name": "pt2", 00:18:29.254 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:29.254 "is_configured": true, 00:18:29.254 "data_offset": 2048, 00:18:29.254 "data_size": 63488 00:18:29.254 }, 00:18:29.254 { 00:18:29.254 "name": "pt3", 00:18:29.254 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:29.254 "is_configured": true, 00:18:29.254 "data_offset": 2048, 00:18:29.254 "data_size": 63488 00:18:29.254 } 00:18:29.254 ] 00:18:29.254 } 00:18:29.254 } 00:18:29.254 }' 00:18:29.254 06:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:29.254 06:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:29.254 pt2 00:18:29.254 pt3' 00:18:29.254 06:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:29.254 06:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:29.254 06:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:29.254 06:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:29.254 06:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:29.254 06:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.254 06:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.254 06:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.514 06:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:29.514 06:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:29.514 06:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:29.514 06:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:29.514 06:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:29.514 06:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.514 06:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set 
+x 00:18:29.514 06:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.514 06:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:29.514 06:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:29.514 06:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:29.514 06:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:18:29.514 06:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:29.514 06:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.514 06:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.514 06:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.514 06:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:29.514 06:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:29.514 06:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:29.514 06:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.514 06:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.514 06:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:29.515 [2024-11-26 06:28:13.507898] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:29.515 06:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.515 06:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 
2edc14e1-2955-48ba-9370-5714022a691b '!=' 2edc14e1-2955-48ba-9370-5714022a691b ']' 00:18:29.515 06:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:18:29.515 06:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:29.515 06:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:18:29.515 06:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:29.515 06:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.515 06:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.515 [2024-11-26 06:28:13.555668] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:29.515 06:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.515 06:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:29.515 06:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:29.515 06:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:29.515 06:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:29.515 06:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:29.515 06:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:29.515 06:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:29.515 06:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:29.515 06:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:29.515 06:28:13 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@111 -- # local tmp 00:18:29.515 06:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.515 06:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.515 06:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.515 06:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:29.515 06:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.515 06:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:29.515 "name": "raid_bdev1", 00:18:29.515 "uuid": "2edc14e1-2955-48ba-9370-5714022a691b", 00:18:29.515 "strip_size_kb": 64, 00:18:29.515 "state": "online", 00:18:29.515 "raid_level": "raid5f", 00:18:29.515 "superblock": true, 00:18:29.515 "num_base_bdevs": 3, 00:18:29.515 "num_base_bdevs_discovered": 2, 00:18:29.515 "num_base_bdevs_operational": 2, 00:18:29.515 "base_bdevs_list": [ 00:18:29.515 { 00:18:29.515 "name": null, 00:18:29.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.515 "is_configured": false, 00:18:29.515 "data_offset": 0, 00:18:29.515 "data_size": 63488 00:18:29.515 }, 00:18:29.515 { 00:18:29.515 "name": "pt2", 00:18:29.515 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:29.515 "is_configured": true, 00:18:29.515 "data_offset": 2048, 00:18:29.515 "data_size": 63488 00:18:29.515 }, 00:18:29.515 { 00:18:29.515 "name": "pt3", 00:18:29.515 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:29.515 "is_configured": true, 00:18:29.515 "data_offset": 2048, 00:18:29.515 "data_size": 63488 00:18:29.515 } 00:18:29.515 ] 00:18:29.515 }' 00:18:29.515 06:28:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:29.515 06:28:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.086 
06:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:30.086 06:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.086 06:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.086 [2024-11-26 06:28:14.054833] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:30.086 [2024-11-26 06:28:14.054927] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:30.086 [2024-11-26 06:28:14.055081] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:30.086 [2024-11-26 06:28:14.055153] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:30.086 [2024-11-26 06:28:14.055182] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:30.086 06:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.086 06:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.086 06:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.086 06:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.086 06:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:30.086 06:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.086 06:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:30.086 06:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:30.086 06:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:30.086 06:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # 
(( i < num_base_bdevs )) 00:18:30.086 06:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:30.086 06:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.086 06:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.086 06:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.086 06:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:30.086 06:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:30.086 06:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:18:30.086 06:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.086 06:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.086 06:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.086 06:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:30.086 06:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:30.086 06:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:30.086 06:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:30.086 06:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:30.086 06:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.086 06:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.086 [2024-11-26 06:28:14.126629] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
malloc2 00:18:30.086 [2024-11-26 06:28:14.127160] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:30.086 [2024-11-26 06:28:14.127273] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:18:30.086 [2024-11-26 06:28:14.127366] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:30.086 [2024-11-26 06:28:14.130358] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:30.086 [2024-11-26 06:28:14.130550] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:30.086 [2024-11-26 06:28:14.130733] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:30.086 [2024-11-26 06:28:14.130801] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:30.086 pt2 00:18:30.086 06:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.086 06:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:18:30.086 06:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:30.086 06:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:30.086 06:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:30.086 06:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:30.086 06:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:30.086 06:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:30.086 06:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:30.086 06:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:18:30.086 06:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:30.086 06:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.086 06:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.086 06:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.086 06:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.086 06:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.086 06:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:30.086 "name": "raid_bdev1", 00:18:30.086 "uuid": "2edc14e1-2955-48ba-9370-5714022a691b", 00:18:30.086 "strip_size_kb": 64, 00:18:30.086 "state": "configuring", 00:18:30.086 "raid_level": "raid5f", 00:18:30.086 "superblock": true, 00:18:30.086 "num_base_bdevs": 3, 00:18:30.086 "num_base_bdevs_discovered": 1, 00:18:30.086 "num_base_bdevs_operational": 2, 00:18:30.086 "base_bdevs_list": [ 00:18:30.086 { 00:18:30.086 "name": null, 00:18:30.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.086 "is_configured": false, 00:18:30.086 "data_offset": 2048, 00:18:30.086 "data_size": 63488 00:18:30.086 }, 00:18:30.086 { 00:18:30.086 "name": "pt2", 00:18:30.086 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:30.086 "is_configured": true, 00:18:30.086 "data_offset": 2048, 00:18:30.086 "data_size": 63488 00:18:30.086 }, 00:18:30.086 { 00:18:30.086 "name": null, 00:18:30.086 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:30.086 "is_configured": false, 00:18:30.086 "data_offset": 2048, 00:18:30.086 "data_size": 63488 00:18:30.086 } 00:18:30.086 ] 00:18:30.086 }' 00:18:30.086 06:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:30.086 06:28:14 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.655 06:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:18:30.655 06:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:30.655 06:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:18:30.655 06:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:30.655 06:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.655 06:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.655 [2024-11-26 06:28:14.562148] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:30.655 [2024-11-26 06:28:14.562522] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:30.655 [2024-11-26 06:28:14.562649] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:30.655 [2024-11-26 06:28:14.562696] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:30.655 [2024-11-26 06:28:14.563382] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:30.655 [2024-11-26 06:28:14.563614] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:30.655 [2024-11-26 06:28:14.563805] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:30.655 [2024-11-26 06:28:14.563895] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:30.655 [2024-11-26 06:28:14.564085] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:30.655 [2024-11-26 06:28:14.564129] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:18:30.655 [2024-11-26 
06:28:14.564488] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:30.655 [2024-11-26 06:28:14.570232] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:30.655 [2024-11-26 06:28:14.570288] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:30.655 [2024-11-26 06:28:14.570708] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:30.655 pt3 00:18:30.655 06:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.655 06:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:30.655 06:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:30.655 06:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:30.655 06:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:30.655 06:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:30.655 06:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:30.655 06:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:30.655 06:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:30.655 06:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:30.655 06:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:30.655 06:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.655 06:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:18:30.655 06:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.655 06:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.655 06:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.655 06:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:30.655 "name": "raid_bdev1", 00:18:30.655 "uuid": "2edc14e1-2955-48ba-9370-5714022a691b", 00:18:30.655 "strip_size_kb": 64, 00:18:30.655 "state": "online", 00:18:30.655 "raid_level": "raid5f", 00:18:30.655 "superblock": true, 00:18:30.655 "num_base_bdevs": 3, 00:18:30.655 "num_base_bdevs_discovered": 2, 00:18:30.655 "num_base_bdevs_operational": 2, 00:18:30.655 "base_bdevs_list": [ 00:18:30.655 { 00:18:30.655 "name": null, 00:18:30.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.655 "is_configured": false, 00:18:30.655 "data_offset": 2048, 00:18:30.655 "data_size": 63488 00:18:30.655 }, 00:18:30.655 { 00:18:30.655 "name": "pt2", 00:18:30.655 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:30.655 "is_configured": true, 00:18:30.655 "data_offset": 2048, 00:18:30.655 "data_size": 63488 00:18:30.655 }, 00:18:30.655 { 00:18:30.655 "name": "pt3", 00:18:30.655 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:30.655 "is_configured": true, 00:18:30.655 "data_offset": 2048, 00:18:30.655 "data_size": 63488 00:18:30.655 } 00:18:30.655 ] 00:18:30.655 }' 00:18:30.655 06:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:30.655 06:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.915 06:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:30.915 06:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.915 06:28:14 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:30.915 [2024-11-26 06:28:14.987016] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:30.915 [2024-11-26 06:28:14.987062] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:30.915 [2024-11-26 06:28:14.987168] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:30.915 [2024-11-26 06:28:14.987241] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:30.915 [2024-11-26 06:28:14.987252] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:30.915 06:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.915 06:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.915 06:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.915 06:28:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.915 06:28:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:30.915 06:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.915 06:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:30.915 06:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:30.915 06:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:18:30.915 06:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:18:30.915 06:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:18:30.915 06:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.915 06:28:15 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.175 06:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.175 06:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:31.175 06:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.175 06:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.175 [2024-11-26 06:28:15.062910] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:31.175 [2024-11-26 06:28:15.063232] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:31.175 [2024-11-26 06:28:15.063268] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:31.176 [2024-11-26 06:28:15.063280] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:31.176 [2024-11-26 06:28:15.066880] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:31.176 [2024-11-26 06:28:15.067042] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:31.176 [2024-11-26 06:28:15.067262] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:31.176 [2024-11-26 06:28:15.067337] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:31.176 [2024-11-26 06:28:15.067577] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:31.176 [2024-11-26 06:28:15.067593] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:31.176 [2024-11-26 06:28:15.067616] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:31.176 
[2024-11-26 06:28:15.067710] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:31.176 pt1 00:18:31.176 06:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.176 06:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:18:31.176 06:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:18:31.176 06:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:31.176 06:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:31.176 06:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:31.176 06:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:31.176 06:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:31.176 06:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:31.176 06:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:31.176 06:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:31.176 06:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:31.176 06:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.176 06:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.176 06:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.176 06:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.176 06:28:15 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.176 06:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:31.176 "name": "raid_bdev1", 00:18:31.176 "uuid": "2edc14e1-2955-48ba-9370-5714022a691b", 00:18:31.176 "strip_size_kb": 64, 00:18:31.176 "state": "configuring", 00:18:31.176 "raid_level": "raid5f", 00:18:31.176 "superblock": true, 00:18:31.176 "num_base_bdevs": 3, 00:18:31.176 "num_base_bdevs_discovered": 1, 00:18:31.176 "num_base_bdevs_operational": 2, 00:18:31.176 "base_bdevs_list": [ 00:18:31.176 { 00:18:31.176 "name": null, 00:18:31.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.176 "is_configured": false, 00:18:31.176 "data_offset": 2048, 00:18:31.176 "data_size": 63488 00:18:31.176 }, 00:18:31.176 { 00:18:31.176 "name": "pt2", 00:18:31.176 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:31.176 "is_configured": true, 00:18:31.176 "data_offset": 2048, 00:18:31.176 "data_size": 63488 00:18:31.176 }, 00:18:31.176 { 00:18:31.176 "name": null, 00:18:31.176 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:31.176 "is_configured": false, 00:18:31.176 "data_offset": 2048, 00:18:31.176 "data_size": 63488 00:18:31.176 } 00:18:31.176 ] 00:18:31.176 }' 00:18:31.176 06:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:31.176 06:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.436 06:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:18:31.436 06:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.436 06:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.436 06:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:31.436 06:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:18:31.697 06:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:18:31.697 06:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:31.697 06:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.697 06:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.697 [2024-11-26 06:28:15.578442] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:31.697 [2024-11-26 06:28:15.578942] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:31.697 [2024-11-26 06:28:15.579103] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:18:31.697 [2024-11-26 06:28:15.579221] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:31.697 [2024-11-26 06:28:15.579947] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:31.697 [2024-11-26 06:28:15.580120] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:31.697 [2024-11-26 06:28:15.580293] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:31.697 [2024-11-26 06:28:15.580428] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:31.697 [2024-11-26 06:28:15.580653] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:18:31.697 [2024-11-26 06:28:15.580698] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:18:31.697 [2024-11-26 06:28:15.581098] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:31.697 [2024-11-26 06:28:15.587366] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:31.697 [2024-11-26 
06:28:15.587436] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:31.697 [2024-11-26 06:28:15.587806] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:31.697 pt3 00:18:31.697 06:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.697 06:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:31.697 06:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:31.697 06:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:31.697 06:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:31.697 06:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:31.697 06:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:31.697 06:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:31.697 06:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:31.697 06:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:31.697 06:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:31.697 06:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.697 06:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.697 06:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.697 06:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:31.697 06:28:15 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.697 06:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:31.697 "name": "raid_bdev1", 00:18:31.697 "uuid": "2edc14e1-2955-48ba-9370-5714022a691b", 00:18:31.697 "strip_size_kb": 64, 00:18:31.697 "state": "online", 00:18:31.697 "raid_level": "raid5f", 00:18:31.697 "superblock": true, 00:18:31.697 "num_base_bdevs": 3, 00:18:31.697 "num_base_bdevs_discovered": 2, 00:18:31.697 "num_base_bdevs_operational": 2, 00:18:31.697 "base_bdevs_list": [ 00:18:31.697 { 00:18:31.697 "name": null, 00:18:31.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.697 "is_configured": false, 00:18:31.697 "data_offset": 2048, 00:18:31.697 "data_size": 63488 00:18:31.697 }, 00:18:31.697 { 00:18:31.697 "name": "pt2", 00:18:31.697 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:31.697 "is_configured": true, 00:18:31.697 "data_offset": 2048, 00:18:31.697 "data_size": 63488 00:18:31.697 }, 00:18:31.697 { 00:18:31.697 "name": "pt3", 00:18:31.697 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:31.697 "is_configured": true, 00:18:31.697 "data_offset": 2048, 00:18:31.697 "data_size": 63488 00:18:31.697 } 00:18:31.697 ] 00:18:31.697 }' 00:18:31.697 06:28:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:31.697 06:28:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.000 06:28:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:32.000 06:28:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:32.000 06:28:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.000 06:28:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.000 06:28:16 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.000 06:28:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:32.000 06:28:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:32.000 06:28:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:32.000 06:28:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.000 06:28:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.259 [2024-11-26 06:28:16.131537] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:32.259 06:28:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.259 06:28:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 2edc14e1-2955-48ba-9370-5714022a691b '!=' 2edc14e1-2955-48ba-9370-5714022a691b ']' 00:18:32.259 06:28:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81697 00:18:32.259 06:28:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 81697 ']' 00:18:32.259 06:28:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 81697 00:18:32.259 06:28:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:18:32.259 06:28:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:32.259 06:28:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81697 00:18:32.259 06:28:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:32.259 06:28:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:32.259 killing process with pid 81697 00:18:32.259 06:28:16 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 81697' 00:18:32.259 06:28:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 81697 00:18:32.259 [2024-11-26 06:28:16.214097] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:32.259 [2024-11-26 06:28:16.214237] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:32.259 06:28:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 81697 00:18:32.259 [2024-11-26 06:28:16.214331] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:32.259 [2024-11-26 06:28:16.214352] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:32.518 [2024-11-26 06:28:16.560731] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:33.897 06:28:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:18:33.897 00:18:33.897 real 0m8.155s 00:18:33.897 user 0m12.499s 00:18:33.897 sys 0m1.606s 00:18:33.897 06:28:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:33.897 06:28:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:33.897 ************************************ 00:18:33.897 END TEST raid5f_superblock_test 00:18:33.898 ************************************ 00:18:33.898 06:28:17 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:18:33.898 06:28:17 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:18:33.898 06:28:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:33.898 06:28:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:33.898 06:28:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:33.898 ************************************ 00:18:33.898 START TEST 
raid5f_rebuild_test 00:18:33.898 ************************************ 00:18:33.898 06:28:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:18:33.898 06:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:18:33.898 06:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:18:33.898 06:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:18:33.898 06:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:33.898 06:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:33.898 06:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:33.898 06:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:33.898 06:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:33.898 06:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:33.898 06:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:33.898 06:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:33.898 06:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:33.898 06:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:33.898 06:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:18:33.898 06:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:33.898 06:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:33.898 06:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:33.898 06:28:17 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:33.898 06:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:33.898 06:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:33.898 06:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:33.898 06:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:33.898 06:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:33.898 06:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:18:33.898 06:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:18:33.898 06:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:18:33.898 06:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:18:33.898 06:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:18:33.898 06:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=82143 00:18:33.898 06:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 82143 00:18:33.898 06:28:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:33.898 06:28:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 82143 ']' 00:18:33.898 06:28:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:33.898 06:28:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:33.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:33.898 06:28:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:33.898 06:28:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:33.898 06:28:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:33.898 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:33.898 Zero copy mechanism will not be used. 00:18:33.898 [2024-11-26 06:28:17.980885] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:18:33.898 [2024-11-26 06:28:17.981005] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82143 ] 00:18:34.158 [2024-11-26 06:28:18.156402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.417 [2024-11-26 06:28:18.300761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:34.676 [2024-11-26 06:28:18.555322] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:34.676 [2024-11-26 06:28:18.555370] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:34.935 06:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:34.935 06:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:18:34.935 06:28:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:34.935 06:28:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:34.935 06:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.935 06:28:18 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:34.935 BaseBdev1_malloc 00:18:34.935 06:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.935 06:28:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:34.935 06:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.935 06:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.935 [2024-11-26 06:28:18.877641] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:34.935 [2024-11-26 06:28:18.877718] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:34.935 [2024-11-26 06:28:18.877745] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:34.935 [2024-11-26 06:28:18.877758] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:34.935 [2024-11-26 06:28:18.880301] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:34.935 [2024-11-26 06:28:18.880338] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:34.935 BaseBdev1 00:18:34.935 06:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.935 06:28:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:34.935 06:28:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:34.935 06:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.935 06:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.935 BaseBdev2_malloc 00:18:34.935 06:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.935 06:28:18 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:34.935 06:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.935 06:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.935 [2024-11-26 06:28:18.940375] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:34.935 [2024-11-26 06:28:18.940460] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:34.935 [2024-11-26 06:28:18.940484] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:34.935 [2024-11-26 06:28:18.940499] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:34.935 [2024-11-26 06:28:18.943100] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:34.935 [2024-11-26 06:28:18.943134] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:34.935 BaseBdev2 00:18:34.935 06:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.935 06:28:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:34.935 06:28:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:34.935 06:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.935 06:28:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.935 BaseBdev3_malloc 00:18:34.935 06:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.935 06:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:18:34.935 06:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:18:34.935 06:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.935 [2024-11-26 06:28:19.010727] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:18:34.935 [2024-11-26 06:28:19.010791] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:34.935 [2024-11-26 06:28:19.010816] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:34.935 [2024-11-26 06:28:19.010829] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:34.935 [2024-11-26 06:28:19.013518] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:34.935 [2024-11-26 06:28:19.013561] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:34.935 BaseBdev3 00:18:34.935 06:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.935 06:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:18:34.935 06:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.935 06:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.196 spare_malloc 00:18:35.196 06:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.196 06:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:35.196 06:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.196 06:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.196 spare_delay 00:18:35.196 06:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.196 06:28:19 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:35.196 06:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.196 06:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.196 [2024-11-26 06:28:19.085913] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:35.196 [2024-11-26 06:28:19.085990] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:35.196 [2024-11-26 06:28:19.086011] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:18:35.196 [2024-11-26 06:28:19.086023] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:35.196 [2024-11-26 06:28:19.088777] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:35.196 [2024-11-26 06:28:19.088822] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:35.196 spare 00:18:35.196 06:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.196 06:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:18:35.196 06:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.196 06:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.196 [2024-11-26 06:28:19.097981] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:35.196 [2024-11-26 06:28:19.100242] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:35.196 [2024-11-26 06:28:19.100310] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:35.196 [2024-11-26 06:28:19.100428] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007780 00:18:35.196 [2024-11-26 06:28:19.100441] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:18:35.196 [2024-11-26 06:28:19.100761] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:35.196 [2024-11-26 06:28:19.106889] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:35.196 [2024-11-26 06:28:19.106917] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:35.196 [2024-11-26 06:28:19.107134] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:35.196 06:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.196 06:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:35.196 06:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:35.196 06:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:35.196 06:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:35.196 06:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:35.196 06:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:35.196 06:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:35.196 06:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:35.196 06:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:35.196 06:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:35.197 06:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.197 
06:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.197 06:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.197 06:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.197 06:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.197 06:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:35.197 "name": "raid_bdev1", 00:18:35.197 "uuid": "5a512515-8680-4b78-9dd2-d541335ce1d9", 00:18:35.197 "strip_size_kb": 64, 00:18:35.197 "state": "online", 00:18:35.197 "raid_level": "raid5f", 00:18:35.197 "superblock": false, 00:18:35.197 "num_base_bdevs": 3, 00:18:35.197 "num_base_bdevs_discovered": 3, 00:18:35.197 "num_base_bdevs_operational": 3, 00:18:35.197 "base_bdevs_list": [ 00:18:35.197 { 00:18:35.197 "name": "BaseBdev1", 00:18:35.197 "uuid": "2027aa45-fff8-5d75-b60a-a2262567866e", 00:18:35.197 "is_configured": true, 00:18:35.197 "data_offset": 0, 00:18:35.197 "data_size": 65536 00:18:35.197 }, 00:18:35.197 { 00:18:35.197 "name": "BaseBdev2", 00:18:35.197 "uuid": "be0a5c23-1437-564a-bf55-15eaabcf4af8", 00:18:35.197 "is_configured": true, 00:18:35.197 "data_offset": 0, 00:18:35.197 "data_size": 65536 00:18:35.197 }, 00:18:35.197 { 00:18:35.197 "name": "BaseBdev3", 00:18:35.197 "uuid": "48c9263b-773a-54eb-8c47-e2b1280ddc00", 00:18:35.197 "is_configured": true, 00:18:35.197 "data_offset": 0, 00:18:35.197 "data_size": 65536 00:18:35.197 } 00:18:35.197 ] 00:18:35.197 }' 00:18:35.197 06:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:35.197 06:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.457 06:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:35.457 06:28:19 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:35.457 06:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.457 06:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.457 [2024-11-26 06:28:19.582780] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:35.716 06:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.716 06:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:18:35.716 06:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.716 06:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.716 06:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:35.716 06:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.716 06:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.716 06:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:18:35.716 06:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:35.716 06:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:35.716 06:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:35.716 06:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:35.716 06:28:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:35.716 06:28:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:35.716 06:28:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:35.716 
06:28:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:35.716 06:28:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:35.716 06:28:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:18:35.716 06:28:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:35.716 06:28:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:35.716 06:28:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:35.976 [2024-11-26 06:28:19.854165] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:35.976 /dev/nbd0 00:18:35.976 06:28:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:35.976 06:28:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:35.976 06:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:35.976 06:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:18:35.976 06:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:35.976 06:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:35.976 06:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:35.976 06:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:18:35.976 06:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:35.976 06:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:35.976 06:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
bs=4096 count=1 iflag=direct 00:18:35.976 1+0 records in 00:18:35.976 1+0 records out 00:18:35.976 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000343001 s, 11.9 MB/s 00:18:35.976 06:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:35.976 06:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:18:35.976 06:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:35.976 06:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:35.976 06:28:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:18:35.976 06:28:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:35.976 06:28:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:35.976 06:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:18:35.976 06:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:18:35.976 06:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:18:35.976 06:28:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:18:36.236 512+0 records in 00:18:36.236 512+0 records out 00:18:36.236 67108864 bytes (67 MB, 64 MiB) copied, 0.366795 s, 183 MB/s 00:18:36.236 06:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:36.236 06:28:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:36.236 06:28:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:36.236 06:28:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:36.236 06:28:20 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:18:36.236 06:28:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:36.236 06:28:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:36.496 06:28:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:36.496 [2024-11-26 06:28:20.514639] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:36.496 06:28:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:36.496 06:28:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:36.496 06:28:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:36.496 06:28:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:36.496 06:28:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:36.496 06:28:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:36.496 06:28:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:36.496 06:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:36.496 06:28:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.496 06:28:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.496 [2024-11-26 06:28:20.532546] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:36.496 06:28:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.496 06:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:36.496 06:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:18:36.496 06:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:36.496 06:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:36.496 06:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:36.496 06:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:36.496 06:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:36.496 06:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:36.496 06:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:36.496 06:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:36.496 06:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:36.496 06:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.496 06:28:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.496 06:28:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.496 06:28:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.496 06:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:36.496 "name": "raid_bdev1", 00:18:36.496 "uuid": "5a512515-8680-4b78-9dd2-d541335ce1d9", 00:18:36.496 "strip_size_kb": 64, 00:18:36.496 "state": "online", 00:18:36.496 "raid_level": "raid5f", 00:18:36.496 "superblock": false, 00:18:36.496 "num_base_bdevs": 3, 00:18:36.496 "num_base_bdevs_discovered": 2, 00:18:36.496 "num_base_bdevs_operational": 2, 00:18:36.496 "base_bdevs_list": [ 00:18:36.496 { 00:18:36.496 "name": null, 00:18:36.496 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:36.496 "is_configured": false, 00:18:36.496 "data_offset": 0, 00:18:36.496 "data_size": 65536 00:18:36.496 }, 00:18:36.496 { 00:18:36.496 "name": "BaseBdev2", 00:18:36.496 "uuid": "be0a5c23-1437-564a-bf55-15eaabcf4af8", 00:18:36.496 "is_configured": true, 00:18:36.496 "data_offset": 0, 00:18:36.496 "data_size": 65536 00:18:36.496 }, 00:18:36.496 { 00:18:36.496 "name": "BaseBdev3", 00:18:36.496 "uuid": "48c9263b-773a-54eb-8c47-e2b1280ddc00", 00:18:36.496 "is_configured": true, 00:18:36.496 "data_offset": 0, 00:18:36.496 "data_size": 65536 00:18:36.496 } 00:18:36.496 ] 00:18:36.496 }' 00:18:36.496 06:28:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:36.496 06:28:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.063 06:28:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:37.063 06:28:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.063 06:28:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.063 [2024-11-26 06:28:21.011709] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:37.063 [2024-11-26 06:28:21.031657] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:18:37.063 06:28:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.063 06:28:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:37.063 [2024-11-26 06:28:21.040730] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:38.001 06:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:38.001 06:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:38.001 
06:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:38.001 06:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:38.001 06:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:38.001 06:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.001 06:28:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.001 06:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:38.001 06:28:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.001 06:28:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.001 06:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:38.001 "name": "raid_bdev1", 00:18:38.001 "uuid": "5a512515-8680-4b78-9dd2-d541335ce1d9", 00:18:38.001 "strip_size_kb": 64, 00:18:38.001 "state": "online", 00:18:38.001 "raid_level": "raid5f", 00:18:38.001 "superblock": false, 00:18:38.001 "num_base_bdevs": 3, 00:18:38.001 "num_base_bdevs_discovered": 3, 00:18:38.001 "num_base_bdevs_operational": 3, 00:18:38.001 "process": { 00:18:38.001 "type": "rebuild", 00:18:38.001 "target": "spare", 00:18:38.001 "progress": { 00:18:38.001 "blocks": 18432, 00:18:38.001 "percent": 14 00:18:38.001 } 00:18:38.001 }, 00:18:38.001 "base_bdevs_list": [ 00:18:38.001 { 00:18:38.001 "name": "spare", 00:18:38.001 "uuid": "344c5ec1-922c-5d2b-867b-066ed8bf5109", 00:18:38.001 "is_configured": true, 00:18:38.001 "data_offset": 0, 00:18:38.001 "data_size": 65536 00:18:38.001 }, 00:18:38.001 { 00:18:38.001 "name": "BaseBdev2", 00:18:38.001 "uuid": "be0a5c23-1437-564a-bf55-15eaabcf4af8", 00:18:38.001 "is_configured": true, 00:18:38.001 "data_offset": 0, 00:18:38.001 "data_size": 65536 00:18:38.001 }, 00:18:38.001 
{ 00:18:38.001 "name": "BaseBdev3", 00:18:38.001 "uuid": "48c9263b-773a-54eb-8c47-e2b1280ddc00", 00:18:38.001 "is_configured": true, 00:18:38.001 "data_offset": 0, 00:18:38.001 "data_size": 65536 00:18:38.001 } 00:18:38.001 ] 00:18:38.001 }' 00:18:38.001 06:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:38.261 06:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:38.261 06:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:38.261 06:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:38.261 06:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:38.261 06:28:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.261 06:28:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.261 [2024-11-26 06:28:22.195677] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:38.261 [2024-11-26 06:28:22.255903] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:38.261 [2024-11-26 06:28:22.255975] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:38.261 [2024-11-26 06:28:22.255996] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:38.261 [2024-11-26 06:28:22.256004] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:38.261 06:28:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.261 06:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:38.261 06:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:18:38.261 06:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:38.261 06:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:38.261 06:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:38.261 06:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:38.261 06:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:38.261 06:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:38.261 06:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:38.261 06:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:38.261 06:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.261 06:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:38.261 06:28:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.261 06:28:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.261 06:28:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.261 06:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:38.261 "name": "raid_bdev1", 00:18:38.261 "uuid": "5a512515-8680-4b78-9dd2-d541335ce1d9", 00:18:38.261 "strip_size_kb": 64, 00:18:38.261 "state": "online", 00:18:38.261 "raid_level": "raid5f", 00:18:38.261 "superblock": false, 00:18:38.261 "num_base_bdevs": 3, 00:18:38.261 "num_base_bdevs_discovered": 2, 00:18:38.261 "num_base_bdevs_operational": 2, 00:18:38.261 "base_bdevs_list": [ 00:18:38.261 { 00:18:38.261 "name": null, 00:18:38.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.261 
"is_configured": false, 00:18:38.261 "data_offset": 0, 00:18:38.261 "data_size": 65536 00:18:38.261 }, 00:18:38.261 { 00:18:38.261 "name": "BaseBdev2", 00:18:38.261 "uuid": "be0a5c23-1437-564a-bf55-15eaabcf4af8", 00:18:38.261 "is_configured": true, 00:18:38.261 "data_offset": 0, 00:18:38.261 "data_size": 65536 00:18:38.261 }, 00:18:38.261 { 00:18:38.261 "name": "BaseBdev3", 00:18:38.261 "uuid": "48c9263b-773a-54eb-8c47-e2b1280ddc00", 00:18:38.261 "is_configured": true, 00:18:38.261 "data_offset": 0, 00:18:38.261 "data_size": 65536 00:18:38.261 } 00:18:38.261 ] 00:18:38.261 }' 00:18:38.261 06:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:38.261 06:28:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.864 06:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:38.864 06:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:38.864 06:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:38.864 06:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:38.864 06:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:38.864 06:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.864 06:28:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.864 06:28:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.864 06:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:38.864 06:28:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.864 06:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:38.864 "name": 
"raid_bdev1", 00:18:38.864 "uuid": "5a512515-8680-4b78-9dd2-d541335ce1d9", 00:18:38.864 "strip_size_kb": 64, 00:18:38.864 "state": "online", 00:18:38.864 "raid_level": "raid5f", 00:18:38.864 "superblock": false, 00:18:38.864 "num_base_bdevs": 3, 00:18:38.864 "num_base_bdevs_discovered": 2, 00:18:38.864 "num_base_bdevs_operational": 2, 00:18:38.864 "base_bdevs_list": [ 00:18:38.864 { 00:18:38.864 "name": null, 00:18:38.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.864 "is_configured": false, 00:18:38.864 "data_offset": 0, 00:18:38.864 "data_size": 65536 00:18:38.865 }, 00:18:38.865 { 00:18:38.865 "name": "BaseBdev2", 00:18:38.865 "uuid": "be0a5c23-1437-564a-bf55-15eaabcf4af8", 00:18:38.865 "is_configured": true, 00:18:38.865 "data_offset": 0, 00:18:38.865 "data_size": 65536 00:18:38.865 }, 00:18:38.865 { 00:18:38.865 "name": "BaseBdev3", 00:18:38.865 "uuid": "48c9263b-773a-54eb-8c47-e2b1280ddc00", 00:18:38.865 "is_configured": true, 00:18:38.865 "data_offset": 0, 00:18:38.865 "data_size": 65536 00:18:38.865 } 00:18:38.865 ] 00:18:38.865 }' 00:18:38.865 06:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:38.865 06:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:38.865 06:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:38.865 06:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:38.865 06:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:38.865 06:28:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.865 06:28:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.865 [2024-11-26 06:28:22.906710] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:38.865 [2024-11-26 
06:28:22.925393] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:18:38.865 06:28:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.865 06:28:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:38.865 [2024-11-26 06:28:22.934243] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:39.809 06:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:39.809 06:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:39.809 06:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:39.809 06:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:39.809 06:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:39.809 06:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.809 06:28:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.809 06:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.809 06:28:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.069 06:28:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.069 06:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:40.069 "name": "raid_bdev1", 00:18:40.069 "uuid": "5a512515-8680-4b78-9dd2-d541335ce1d9", 00:18:40.069 "strip_size_kb": 64, 00:18:40.069 "state": "online", 00:18:40.069 "raid_level": "raid5f", 00:18:40.069 "superblock": false, 00:18:40.069 "num_base_bdevs": 3, 00:18:40.069 "num_base_bdevs_discovered": 3, 00:18:40.069 "num_base_bdevs_operational": 3, 
00:18:40.069 "process": { 00:18:40.069 "type": "rebuild", 00:18:40.069 "target": "spare", 00:18:40.069 "progress": { 00:18:40.069 "blocks": 20480, 00:18:40.069 "percent": 15 00:18:40.069 } 00:18:40.069 }, 00:18:40.069 "base_bdevs_list": [ 00:18:40.069 { 00:18:40.069 "name": "spare", 00:18:40.069 "uuid": "344c5ec1-922c-5d2b-867b-066ed8bf5109", 00:18:40.069 "is_configured": true, 00:18:40.069 "data_offset": 0, 00:18:40.069 "data_size": 65536 00:18:40.069 }, 00:18:40.069 { 00:18:40.069 "name": "BaseBdev2", 00:18:40.069 "uuid": "be0a5c23-1437-564a-bf55-15eaabcf4af8", 00:18:40.069 "is_configured": true, 00:18:40.069 "data_offset": 0, 00:18:40.069 "data_size": 65536 00:18:40.069 }, 00:18:40.069 { 00:18:40.069 "name": "BaseBdev3", 00:18:40.069 "uuid": "48c9263b-773a-54eb-8c47-e2b1280ddc00", 00:18:40.069 "is_configured": true, 00:18:40.069 "data_offset": 0, 00:18:40.069 "data_size": 65536 00:18:40.069 } 00:18:40.069 ] 00:18:40.069 }' 00:18:40.069 06:28:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:40.069 06:28:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:40.069 06:28:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:40.069 06:28:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:40.069 06:28:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:18:40.069 06:28:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:18:40.069 06:28:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:18:40.069 06:28:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=577 00:18:40.069 06:28:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:40.069 06:28:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 
-- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:40.069 06:28:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:40.069 06:28:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:40.069 06:28:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:40.069 06:28:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:40.069 06:28:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.069 06:28:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.069 06:28:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.069 06:28:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.069 06:28:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.069 06:28:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:40.069 "name": "raid_bdev1", 00:18:40.069 "uuid": "5a512515-8680-4b78-9dd2-d541335ce1d9", 00:18:40.069 "strip_size_kb": 64, 00:18:40.069 "state": "online", 00:18:40.069 "raid_level": "raid5f", 00:18:40.069 "superblock": false, 00:18:40.069 "num_base_bdevs": 3, 00:18:40.069 "num_base_bdevs_discovered": 3, 00:18:40.069 "num_base_bdevs_operational": 3, 00:18:40.069 "process": { 00:18:40.069 "type": "rebuild", 00:18:40.069 "target": "spare", 00:18:40.069 "progress": { 00:18:40.069 "blocks": 22528, 00:18:40.069 "percent": 17 00:18:40.069 } 00:18:40.069 }, 00:18:40.069 "base_bdevs_list": [ 00:18:40.069 { 00:18:40.069 "name": "spare", 00:18:40.069 "uuid": "344c5ec1-922c-5d2b-867b-066ed8bf5109", 00:18:40.069 "is_configured": true, 00:18:40.069 "data_offset": 0, 00:18:40.069 "data_size": 65536 00:18:40.069 }, 00:18:40.069 { 00:18:40.069 "name": "BaseBdev2", 
00:18:40.069 "uuid": "be0a5c23-1437-564a-bf55-15eaabcf4af8", 00:18:40.069 "is_configured": true, 00:18:40.069 "data_offset": 0, 00:18:40.069 "data_size": 65536 00:18:40.069 }, 00:18:40.069 { 00:18:40.069 "name": "BaseBdev3", 00:18:40.069 "uuid": "48c9263b-773a-54eb-8c47-e2b1280ddc00", 00:18:40.069 "is_configured": true, 00:18:40.069 "data_offset": 0, 00:18:40.069 "data_size": 65536 00:18:40.069 } 00:18:40.069 ] 00:18:40.069 }' 00:18:40.069 06:28:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:40.069 06:28:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:40.069 06:28:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:40.329 06:28:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:40.329 06:28:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:41.267 06:28:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:41.267 06:28:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:41.267 06:28:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:41.267 06:28:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:41.267 06:28:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:41.267 06:28:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:41.267 06:28:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.267 06:28:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.267 06:28:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.267 
06:28:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.267 06:28:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.267 06:28:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:41.267 "name": "raid_bdev1", 00:18:41.267 "uuid": "5a512515-8680-4b78-9dd2-d541335ce1d9", 00:18:41.267 "strip_size_kb": 64, 00:18:41.267 "state": "online", 00:18:41.267 "raid_level": "raid5f", 00:18:41.267 "superblock": false, 00:18:41.267 "num_base_bdevs": 3, 00:18:41.267 "num_base_bdevs_discovered": 3, 00:18:41.267 "num_base_bdevs_operational": 3, 00:18:41.267 "process": { 00:18:41.267 "type": "rebuild", 00:18:41.267 "target": "spare", 00:18:41.267 "progress": { 00:18:41.267 "blocks": 45056, 00:18:41.267 "percent": 34 00:18:41.267 } 00:18:41.268 }, 00:18:41.268 "base_bdevs_list": [ 00:18:41.268 { 00:18:41.268 "name": "spare", 00:18:41.268 "uuid": "344c5ec1-922c-5d2b-867b-066ed8bf5109", 00:18:41.268 "is_configured": true, 00:18:41.268 "data_offset": 0, 00:18:41.268 "data_size": 65536 00:18:41.268 }, 00:18:41.268 { 00:18:41.268 "name": "BaseBdev2", 00:18:41.268 "uuid": "be0a5c23-1437-564a-bf55-15eaabcf4af8", 00:18:41.268 "is_configured": true, 00:18:41.268 "data_offset": 0, 00:18:41.268 "data_size": 65536 00:18:41.268 }, 00:18:41.268 { 00:18:41.268 "name": "BaseBdev3", 00:18:41.268 "uuid": "48c9263b-773a-54eb-8c47-e2b1280ddc00", 00:18:41.268 "is_configured": true, 00:18:41.268 "data_offset": 0, 00:18:41.268 "data_size": 65536 00:18:41.268 } 00:18:41.268 ] 00:18:41.268 }' 00:18:41.268 06:28:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:41.268 06:28:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:41.268 06:28:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:41.268 06:28:25 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:41.268 06:28:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:42.642 06:28:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:42.642 06:28:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:42.642 06:28:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:42.642 06:28:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:42.642 06:28:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:42.642 06:28:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:42.642 06:28:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.642 06:28:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.642 06:28:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.642 06:28:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.642 06:28:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.642 06:28:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:42.642 "name": "raid_bdev1", 00:18:42.642 "uuid": "5a512515-8680-4b78-9dd2-d541335ce1d9", 00:18:42.642 "strip_size_kb": 64, 00:18:42.642 "state": "online", 00:18:42.642 "raid_level": "raid5f", 00:18:42.642 "superblock": false, 00:18:42.642 "num_base_bdevs": 3, 00:18:42.642 "num_base_bdevs_discovered": 3, 00:18:42.642 "num_base_bdevs_operational": 3, 00:18:42.642 "process": { 00:18:42.642 "type": "rebuild", 00:18:42.642 "target": "spare", 00:18:42.642 "progress": { 00:18:42.642 "blocks": 69632, 00:18:42.642 "percent": 53 00:18:42.642 } 
00:18:42.642 }, 00:18:42.642 "base_bdevs_list": [ 00:18:42.642 { 00:18:42.642 "name": "spare", 00:18:42.642 "uuid": "344c5ec1-922c-5d2b-867b-066ed8bf5109", 00:18:42.642 "is_configured": true, 00:18:42.642 "data_offset": 0, 00:18:42.642 "data_size": 65536 00:18:42.642 }, 00:18:42.642 { 00:18:42.642 "name": "BaseBdev2", 00:18:42.642 "uuid": "be0a5c23-1437-564a-bf55-15eaabcf4af8", 00:18:42.642 "is_configured": true, 00:18:42.642 "data_offset": 0, 00:18:42.642 "data_size": 65536 00:18:42.642 }, 00:18:42.642 { 00:18:42.642 "name": "BaseBdev3", 00:18:42.642 "uuid": "48c9263b-773a-54eb-8c47-e2b1280ddc00", 00:18:42.642 "is_configured": true, 00:18:42.642 "data_offset": 0, 00:18:42.642 "data_size": 65536 00:18:42.642 } 00:18:42.642 ] 00:18:42.642 }' 00:18:42.642 06:28:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:42.642 06:28:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:42.642 06:28:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:42.642 06:28:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:42.642 06:28:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:43.597 06:28:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:43.597 06:28:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:43.597 06:28:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:43.597 06:28:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:43.597 06:28:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:43.597 06:28:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:43.597 06:28:27 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.597 06:28:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.597 06:28:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.597 06:28:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.597 06:28:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.597 06:28:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:43.597 "name": "raid_bdev1", 00:18:43.597 "uuid": "5a512515-8680-4b78-9dd2-d541335ce1d9", 00:18:43.597 "strip_size_kb": 64, 00:18:43.597 "state": "online", 00:18:43.597 "raid_level": "raid5f", 00:18:43.597 "superblock": false, 00:18:43.597 "num_base_bdevs": 3, 00:18:43.597 "num_base_bdevs_discovered": 3, 00:18:43.597 "num_base_bdevs_operational": 3, 00:18:43.597 "process": { 00:18:43.597 "type": "rebuild", 00:18:43.597 "target": "spare", 00:18:43.597 "progress": { 00:18:43.597 "blocks": 92160, 00:18:43.597 "percent": 70 00:18:43.597 } 00:18:43.597 }, 00:18:43.597 "base_bdevs_list": [ 00:18:43.597 { 00:18:43.597 "name": "spare", 00:18:43.597 "uuid": "344c5ec1-922c-5d2b-867b-066ed8bf5109", 00:18:43.597 "is_configured": true, 00:18:43.597 "data_offset": 0, 00:18:43.597 "data_size": 65536 00:18:43.597 }, 00:18:43.597 { 00:18:43.597 "name": "BaseBdev2", 00:18:43.597 "uuid": "be0a5c23-1437-564a-bf55-15eaabcf4af8", 00:18:43.597 "is_configured": true, 00:18:43.597 "data_offset": 0, 00:18:43.597 "data_size": 65536 00:18:43.597 }, 00:18:43.597 { 00:18:43.597 "name": "BaseBdev3", 00:18:43.597 "uuid": "48c9263b-773a-54eb-8c47-e2b1280ddc00", 00:18:43.597 "is_configured": true, 00:18:43.597 "data_offset": 0, 00:18:43.597 "data_size": 65536 00:18:43.597 } 00:18:43.597 ] 00:18:43.597 }' 00:18:43.597 06:28:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# jq -r '.process.type // "none"' 00:18:43.597 06:28:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:43.597 06:28:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:43.597 06:28:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:43.597 06:28:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:44.535 06:28:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:44.535 06:28:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:44.535 06:28:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:44.535 06:28:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:44.535 06:28:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:44.535 06:28:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:44.535 06:28:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.535 06:28:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.535 06:28:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.535 06:28:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.795 06:28:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.795 06:28:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:44.795 "name": "raid_bdev1", 00:18:44.795 "uuid": "5a512515-8680-4b78-9dd2-d541335ce1d9", 00:18:44.795 "strip_size_kb": 64, 00:18:44.795 "state": "online", 00:18:44.795 "raid_level": "raid5f", 00:18:44.795 "superblock": 
false, 00:18:44.795 "num_base_bdevs": 3, 00:18:44.795 "num_base_bdevs_discovered": 3, 00:18:44.795 "num_base_bdevs_operational": 3, 00:18:44.795 "process": { 00:18:44.795 "type": "rebuild", 00:18:44.795 "target": "spare", 00:18:44.795 "progress": { 00:18:44.795 "blocks": 114688, 00:18:44.795 "percent": 87 00:18:44.795 } 00:18:44.795 }, 00:18:44.795 "base_bdevs_list": [ 00:18:44.795 { 00:18:44.795 "name": "spare", 00:18:44.795 "uuid": "344c5ec1-922c-5d2b-867b-066ed8bf5109", 00:18:44.795 "is_configured": true, 00:18:44.795 "data_offset": 0, 00:18:44.795 "data_size": 65536 00:18:44.795 }, 00:18:44.795 { 00:18:44.795 "name": "BaseBdev2", 00:18:44.795 "uuid": "be0a5c23-1437-564a-bf55-15eaabcf4af8", 00:18:44.795 "is_configured": true, 00:18:44.795 "data_offset": 0, 00:18:44.795 "data_size": 65536 00:18:44.795 }, 00:18:44.795 { 00:18:44.795 "name": "BaseBdev3", 00:18:44.795 "uuid": "48c9263b-773a-54eb-8c47-e2b1280ddc00", 00:18:44.795 "is_configured": true, 00:18:44.795 "data_offset": 0, 00:18:44.795 "data_size": 65536 00:18:44.795 } 00:18:44.795 ] 00:18:44.795 }' 00:18:44.795 06:28:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:44.795 06:28:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:44.795 06:28:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:44.795 06:28:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:44.795 06:28:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:45.364 [2024-11-26 06:28:29.411204] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:45.364 [2024-11-26 06:28:29.411355] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:45.364 [2024-11-26 06:28:29.411414] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:18:45.935 06:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:45.935 06:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:45.935 06:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:45.935 06:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:45.935 06:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:45.935 06:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:45.935 06:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.935 06:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:45.935 06:28:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.935 06:28:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.935 06:28:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.935 06:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:45.935 "name": "raid_bdev1", 00:18:45.935 "uuid": "5a512515-8680-4b78-9dd2-d541335ce1d9", 00:18:45.935 "strip_size_kb": 64, 00:18:45.935 "state": "online", 00:18:45.935 "raid_level": "raid5f", 00:18:45.935 "superblock": false, 00:18:45.935 "num_base_bdevs": 3, 00:18:45.935 "num_base_bdevs_discovered": 3, 00:18:45.935 "num_base_bdevs_operational": 3, 00:18:45.935 "base_bdevs_list": [ 00:18:45.935 { 00:18:45.935 "name": "spare", 00:18:45.935 "uuid": "344c5ec1-922c-5d2b-867b-066ed8bf5109", 00:18:45.935 "is_configured": true, 00:18:45.935 "data_offset": 0, 00:18:45.935 "data_size": 65536 00:18:45.935 }, 00:18:45.935 { 00:18:45.935 "name": "BaseBdev2", 00:18:45.935 "uuid": 
"be0a5c23-1437-564a-bf55-15eaabcf4af8", 00:18:45.935 "is_configured": true, 00:18:45.935 "data_offset": 0, 00:18:45.935 "data_size": 65536 00:18:45.935 }, 00:18:45.935 { 00:18:45.935 "name": "BaseBdev3", 00:18:45.935 "uuid": "48c9263b-773a-54eb-8c47-e2b1280ddc00", 00:18:45.935 "is_configured": true, 00:18:45.935 "data_offset": 0, 00:18:45.935 "data_size": 65536 00:18:45.935 } 00:18:45.935 ] 00:18:45.935 }' 00:18:45.935 06:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:45.935 06:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:45.935 06:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:45.935 06:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:45.935 06:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:18:45.935 06:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:45.935 06:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:45.935 06:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:45.935 06:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:45.935 06:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:45.935 06:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.935 06:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:45.935 06:28:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.935 06:28:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.935 06:28:29 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.935 06:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:45.935 "name": "raid_bdev1", 00:18:45.935 "uuid": "5a512515-8680-4b78-9dd2-d541335ce1d9", 00:18:45.935 "strip_size_kb": 64, 00:18:45.935 "state": "online", 00:18:45.935 "raid_level": "raid5f", 00:18:45.935 "superblock": false, 00:18:45.935 "num_base_bdevs": 3, 00:18:45.935 "num_base_bdevs_discovered": 3, 00:18:45.935 "num_base_bdevs_operational": 3, 00:18:45.935 "base_bdevs_list": [ 00:18:45.935 { 00:18:45.935 "name": "spare", 00:18:45.935 "uuid": "344c5ec1-922c-5d2b-867b-066ed8bf5109", 00:18:45.935 "is_configured": true, 00:18:45.935 "data_offset": 0, 00:18:45.935 "data_size": 65536 00:18:45.935 }, 00:18:45.935 { 00:18:45.935 "name": "BaseBdev2", 00:18:45.935 "uuid": "be0a5c23-1437-564a-bf55-15eaabcf4af8", 00:18:45.935 "is_configured": true, 00:18:45.935 "data_offset": 0, 00:18:45.935 "data_size": 65536 00:18:45.935 }, 00:18:45.935 { 00:18:45.935 "name": "BaseBdev3", 00:18:45.935 "uuid": "48c9263b-773a-54eb-8c47-e2b1280ddc00", 00:18:45.935 "is_configured": true, 00:18:45.935 "data_offset": 0, 00:18:45.935 "data_size": 65536 00:18:45.935 } 00:18:45.935 ] 00:18:45.935 }' 00:18:45.935 06:28:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:45.935 06:28:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:45.935 06:28:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:46.195 06:28:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:46.195 06:28:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:46.195 06:28:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:46.195 06:28:30 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:46.195 06:28:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:46.195 06:28:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:46.195 06:28:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:46.195 06:28:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:46.195 06:28:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:46.195 06:28:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:46.195 06:28:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:46.195 06:28:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.195 06:28:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.195 06:28:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:46.195 06:28:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.195 06:28:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.195 06:28:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:46.195 "name": "raid_bdev1", 00:18:46.195 "uuid": "5a512515-8680-4b78-9dd2-d541335ce1d9", 00:18:46.195 "strip_size_kb": 64, 00:18:46.195 "state": "online", 00:18:46.195 "raid_level": "raid5f", 00:18:46.195 "superblock": false, 00:18:46.195 "num_base_bdevs": 3, 00:18:46.195 "num_base_bdevs_discovered": 3, 00:18:46.195 "num_base_bdevs_operational": 3, 00:18:46.195 "base_bdevs_list": [ 00:18:46.195 { 00:18:46.195 "name": "spare", 00:18:46.195 "uuid": "344c5ec1-922c-5d2b-867b-066ed8bf5109", 00:18:46.195 "is_configured": true, 00:18:46.195 "data_offset": 
0, 00:18:46.195 "data_size": 65536 00:18:46.195 }, 00:18:46.195 { 00:18:46.195 "name": "BaseBdev2", 00:18:46.195 "uuid": "be0a5c23-1437-564a-bf55-15eaabcf4af8", 00:18:46.195 "is_configured": true, 00:18:46.195 "data_offset": 0, 00:18:46.195 "data_size": 65536 00:18:46.195 }, 00:18:46.195 { 00:18:46.195 "name": "BaseBdev3", 00:18:46.195 "uuid": "48c9263b-773a-54eb-8c47-e2b1280ddc00", 00:18:46.195 "is_configured": true, 00:18:46.195 "data_offset": 0, 00:18:46.195 "data_size": 65536 00:18:46.195 } 00:18:46.195 ] 00:18:46.195 }' 00:18:46.195 06:28:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:46.195 06:28:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.454 06:28:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:46.454 06:28:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.454 06:28:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.454 [2024-11-26 06:28:30.528557] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:46.454 [2024-11-26 06:28:30.528600] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:46.454 [2024-11-26 06:28:30.528740] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:46.454 [2024-11-26 06:28:30.528899] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:46.454 [2024-11-26 06:28:30.528928] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:46.454 06:28:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.454 06:28:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.454 06:28:30 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.454 06:28:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:18:46.454 06:28:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.454 06:28:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.454 06:28:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:46.454 06:28:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:46.454 06:28:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:46.454 06:28:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:46.454 06:28:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:46.454 06:28:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:46.454 06:28:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:46.454 06:28:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:46.454 06:28:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:46.454 06:28:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:18:46.454 06:28:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:46.454 06:28:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:46.454 06:28:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:46.713 /dev/nbd0 00:18:46.713 06:28:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:46.713 06:28:30 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:46.713 06:28:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:46.713 06:28:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:18:46.713 06:28:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:46.713 06:28:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:46.713 06:28:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:46.713 06:28:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:18:46.713 06:28:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:46.713 06:28:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:46.713 06:28:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:46.713 1+0 records in 00:18:46.713 1+0 records out 00:18:46.713 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000379832 s, 10.8 MB/s 00:18:46.713 06:28:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:46.713 06:28:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:18:46.713 06:28:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:46.713 06:28:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:46.713 06:28:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:18:46.713 06:28:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:46.713 06:28:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:46.713 
06:28:30 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:46.973 /dev/nbd1 00:18:46.973 06:28:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:46.973 06:28:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:46.973 06:28:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:46.973 06:28:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:18:46.973 06:28:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:46.973 06:28:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:46.973 06:28:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:46.973 06:28:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:18:46.973 06:28:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:46.973 06:28:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:46.973 06:28:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:46.973 1+0 records in 00:18:46.973 1+0 records out 00:18:46.973 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000378395 s, 10.8 MB/s 00:18:46.973 06:28:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:46.973 06:28:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:18:46.973 06:28:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:46.973 06:28:31 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:46.973 06:28:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:18:46.973 06:28:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:46.973 06:28:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:46.973 06:28:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:18:47.232 06:28:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:47.232 06:28:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:47.232 06:28:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:47.232 06:28:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:47.232 06:28:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:18:47.232 06:28:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:47.232 06:28:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:47.489 06:28:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:47.489 06:28:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:47.489 06:28:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:47.489 06:28:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:47.489 06:28:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:47.489 06:28:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:47.489 06:28:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 
00:18:47.489 06:28:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:47.489 06:28:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:47.489 06:28:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:47.746 06:28:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:47.746 06:28:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:47.746 06:28:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:47.746 06:28:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:47.746 06:28:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:47.746 06:28:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:47.746 06:28:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:47.746 06:28:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:47.746 06:28:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:18:47.746 06:28:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 82143 00:18:47.746 06:28:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 82143 ']' 00:18:47.746 06:28:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 82143 00:18:47.746 06:28:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:18:47.746 06:28:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:47.747 06:28:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82143 00:18:47.747 killing process with pid 82143 00:18:47.747 Received shutdown signal, test time 
was about 60.000000 seconds 00:18:47.747 00:18:47.747 Latency(us) 00:18:47.747 [2024-11-26T06:28:31.884Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:47.747 [2024-11-26T06:28:31.884Z] =================================================================================================================== 00:18:47.747 [2024-11-26T06:28:31.884Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:47.747 06:28:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:47.747 06:28:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:47.747 06:28:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82143' 00:18:47.747 06:28:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 82143 00:18:47.747 [2024-11-26 06:28:31.808048] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:47.747 06:28:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 82143 00:18:48.357 [2024-11-26 06:28:32.258450] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:49.736 06:28:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:18:49.736 00:18:49.736 real 0m15.651s 00:18:49.736 user 0m19.002s 00:18:49.736 sys 0m2.250s 00:18:49.736 06:28:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:49.736 06:28:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:49.736 ************************************ 00:18:49.736 END TEST raid5f_rebuild_test 00:18:49.736 ************************************ 00:18:49.736 06:28:33 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:18:49.736 06:28:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:49.736 06:28:33 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:18:49.737 06:28:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:49.737 ************************************ 00:18:49.737 START TEST raid5f_rebuild_test_sb 00:18:49.737 ************************************ 00:18:49.737 06:28:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 00:18:49.737 06:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:18:49.737 06:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:18:49.737 06:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:49.737 06:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:49.737 06:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:49.737 06:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:49.737 06:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:49.737 06:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:49.737 06:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:49.737 06:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:49.737 06:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:49.737 06:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:49.737 06:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:49.737 06:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:18:49.737 06:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:49.737 06:28:33 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:49.737 06:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:49.737 06:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:49.737 06:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:49.737 06:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:49.737 06:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:49.737 06:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:49.737 06:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:49.737 06:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:18:49.737 06:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:18:49.737 06:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:18:49.737 06:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:18:49.737 06:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:49.737 06:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:49.737 06:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:49.737 06:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82589 00:18:49.737 06:28:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82589 00:18:49.737 06:28:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 82589 
']' 00:18:49.737 06:28:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:49.737 06:28:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:49.737 06:28:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:49.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:49.737 06:28:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:49.737 06:28:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:49.737 [2024-11-26 06:28:33.701085] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:18:49.737 [2024-11-26 06:28:33.701342] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82589 ] 00:18:49.737 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:49.737 Zero copy mechanism will not be used. 
00:18:49.737 [2024-11-26 06:28:33.863644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.996 [2024-11-26 06:28:34.010509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:50.255 [2024-11-26 06:28:34.257614] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:50.255 [2024-11-26 06:28:34.257772] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:50.516 06:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:50.516 06:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:18:50.516 06:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:50.516 06:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:50.516 06:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.516 06:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.516 BaseBdev1_malloc 00:18:50.516 06:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.516 06:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:50.516 06:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.516 06:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.777 [2024-11-26 06:28:34.647796] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:50.777 [2024-11-26 06:28:34.647881] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:50.777 [2024-11-26 06:28:34.647910] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:50.777 
[2024-11-26 06:28:34.647924] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:50.777 [2024-11-26 06:28:34.650779] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:50.777 [2024-11-26 06:28:34.650819] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:50.777 BaseBdev1 00:18:50.777 06:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.777 06:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:50.777 06:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:50.777 06:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.777 06:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.777 BaseBdev2_malloc 00:18:50.777 06:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.777 06:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:50.777 06:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.777 06:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.777 [2024-11-26 06:28:34.711026] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:50.777 [2024-11-26 06:28:34.711181] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:50.777 [2024-11-26 06:28:34.711233] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:50.777 [2024-11-26 06:28:34.711279] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:50.777 [2024-11-26 06:28:34.714043] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:50.777 [2024-11-26 06:28:34.714140] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:50.777 BaseBdev2 00:18:50.777 06:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.777 06:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:50.777 06:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:50.777 06:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.777 06:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.777 BaseBdev3_malloc 00:18:50.777 06:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.777 06:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:18:50.777 06:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.777 06:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.777 [2024-11-26 06:28:34.786897] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:18:50.777 [2024-11-26 06:28:34.787033] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:50.777 [2024-11-26 06:28:34.787096] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:50.777 [2024-11-26 06:28:34.787139] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:50.777 [2024-11-26 06:28:34.789846] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:50.777 [2024-11-26 06:28:34.789931] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:18:50.777 BaseBdev3 00:18:50.777 06:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.777 06:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:18:50.777 06:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.777 06:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.777 spare_malloc 00:18:50.777 06:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.777 06:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:50.777 06:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.777 06:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.777 spare_delay 00:18:50.777 06:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.777 06:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:50.777 06:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.777 06:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.777 [2024-11-26 06:28:34.862805] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:50.777 [2024-11-26 06:28:34.862928] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:50.777 [2024-11-26 06:28:34.862967] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:18:50.777 [2024-11-26 06:28:34.863015] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:50.777 [2024-11-26 06:28:34.865776] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:50.777 [2024-11-26 06:28:34.865872] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:50.777 spare 00:18:50.777 06:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.777 06:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:18:50.777 06:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.777 06:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.777 [2024-11-26 06:28:34.874877] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:50.777 [2024-11-26 06:28:34.877282] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:50.777 [2024-11-26 06:28:34.877411] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:50.777 [2024-11-26 06:28:34.877689] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:50.777 [2024-11-26 06:28:34.877743] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:18:50.777 [2024-11-26 06:28:34.878092] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:50.777 [2024-11-26 06:28:34.884594] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:50.777 [2024-11-26 06:28:34.884660] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:50.777 [2024-11-26 06:28:34.884946] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:50.777 06:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.777 06:28:34 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:50.777 06:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:50.777 06:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:50.777 06:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:50.777 06:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:50.777 06:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:50.777 06:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:50.777 06:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:50.777 06:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:50.777 06:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:50.777 06:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.777 06:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.777 06:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:50.777 06:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.037 06:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.037 06:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:51.037 "name": "raid_bdev1", 00:18:51.037 "uuid": "fc8244dc-ca97-490b-9779-056c1f988219", 00:18:51.037 "strip_size_kb": 64, 00:18:51.037 "state": "online", 00:18:51.037 "raid_level": "raid5f", 00:18:51.037 "superblock": true, 
00:18:51.037 "num_base_bdevs": 3, 00:18:51.037 "num_base_bdevs_discovered": 3, 00:18:51.037 "num_base_bdevs_operational": 3, 00:18:51.037 "base_bdevs_list": [ 00:18:51.037 { 00:18:51.037 "name": "BaseBdev1", 00:18:51.037 "uuid": "db637a29-02bf-59ab-aa20-69f64e13777b", 00:18:51.037 "is_configured": true, 00:18:51.037 "data_offset": 2048, 00:18:51.037 "data_size": 63488 00:18:51.037 }, 00:18:51.037 { 00:18:51.037 "name": "BaseBdev2", 00:18:51.037 "uuid": "5452ccf4-845a-522a-a89c-15f1bd4c3ce7", 00:18:51.037 "is_configured": true, 00:18:51.037 "data_offset": 2048, 00:18:51.037 "data_size": 63488 00:18:51.037 }, 00:18:51.037 { 00:18:51.037 "name": "BaseBdev3", 00:18:51.037 "uuid": "9776b8a7-f46e-5c7b-8ed2-374824caaf1c", 00:18:51.037 "is_configured": true, 00:18:51.037 "data_offset": 2048, 00:18:51.037 "data_size": 63488 00:18:51.037 } 00:18:51.037 ] 00:18:51.037 }' 00:18:51.037 06:28:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:51.037 06:28:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.297 06:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:51.297 06:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:51.297 06:28:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.297 06:28:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.297 [2024-11-26 06:28:35.336901] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:51.297 06:28:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.297 06:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:18:51.297 06:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:51.297 06:28:35 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.297 06:28:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.297 06:28:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.297 06:28:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.558 06:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:18:51.558 06:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:51.558 06:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:51.558 06:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:51.558 06:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:51.558 06:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:51.558 06:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:51.558 06:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:51.558 06:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:51.558 06:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:51.558 06:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:18:51.558 06:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:51.558 06:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:51.558 06:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:51.558 
[2024-11-26 06:28:35.652278] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:51.558 /dev/nbd0 00:18:51.819 06:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:51.819 06:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:51.819 06:28:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:51.819 06:28:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:18:51.819 06:28:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:51.819 06:28:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:51.819 06:28:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:51.819 06:28:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:18:51.819 06:28:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:51.819 06:28:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:51.819 06:28:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:51.819 1+0 records in 00:18:51.819 1+0 records out 00:18:51.819 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000358358 s, 11.4 MB/s 00:18:51.819 06:28:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:51.819 06:28:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:18:51.819 06:28:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:51.819 06:28:35 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:51.819 06:28:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:18:51.819 06:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:51.819 06:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:51.819 06:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:18:51.819 06:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:18:51.819 06:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:18:51.819 06:28:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:18:52.079 496+0 records in 00:18:52.079 496+0 records out 00:18:52.079 65011712 bytes (65 MB, 62 MiB) copied, 0.39124 s, 166 MB/s 00:18:52.079 06:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:52.079 06:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:52.079 06:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:52.079 06:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:52.079 06:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:18:52.079 06:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:52.079 06:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:52.339 [2024-11-26 06:28:36.322901] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:52.339 06:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 
00:18:52.339 06:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:52.339 06:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:52.339 06:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:52.339 06:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:52.339 06:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:52.339 06:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:52.339 06:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:52.339 06:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:52.339 06:28:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.339 06:28:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.339 [2024-11-26 06:28:36.360628] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:52.339 06:28:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.339 06:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:52.339 06:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:52.339 06:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:52.339 06:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:52.339 06:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:52.339 06:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:52.339 06:28:36 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:52.339 06:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:52.340 06:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:52.340 06:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:52.340 06:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.340 06:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.340 06:28:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.340 06:28:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.340 06:28:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.340 06:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:52.340 "name": "raid_bdev1", 00:18:52.340 "uuid": "fc8244dc-ca97-490b-9779-056c1f988219", 00:18:52.340 "strip_size_kb": 64, 00:18:52.340 "state": "online", 00:18:52.340 "raid_level": "raid5f", 00:18:52.340 "superblock": true, 00:18:52.340 "num_base_bdevs": 3, 00:18:52.340 "num_base_bdevs_discovered": 2, 00:18:52.340 "num_base_bdevs_operational": 2, 00:18:52.340 "base_bdevs_list": [ 00:18:52.340 { 00:18:52.340 "name": null, 00:18:52.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.340 "is_configured": false, 00:18:52.340 "data_offset": 0, 00:18:52.340 "data_size": 63488 00:18:52.340 }, 00:18:52.340 { 00:18:52.340 "name": "BaseBdev2", 00:18:52.340 "uuid": "5452ccf4-845a-522a-a89c-15f1bd4c3ce7", 00:18:52.340 "is_configured": true, 00:18:52.340 "data_offset": 2048, 00:18:52.340 "data_size": 63488 00:18:52.340 }, 00:18:52.340 { 00:18:52.340 "name": "BaseBdev3", 00:18:52.340 "uuid": 
"9776b8a7-f46e-5c7b-8ed2-374824caaf1c", 00:18:52.340 "is_configured": true, 00:18:52.340 "data_offset": 2048, 00:18:52.340 "data_size": 63488 00:18:52.340 } 00:18:52.340 ] 00:18:52.340 }' 00:18:52.340 06:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:52.340 06:28:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.910 06:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:52.910 06:28:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.910 06:28:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.910 [2024-11-26 06:28:36.827893] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:52.910 [2024-11-26 06:28:36.849044] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:18:52.910 06:28:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.910 06:28:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:52.910 [2024-11-26 06:28:36.858897] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:53.915 06:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:53.915 06:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:53.915 06:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:53.915 06:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:53.915 06:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:53.915 06:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:18:53.915 06:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.915 06:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.915 06:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:53.915 06:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.915 06:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:53.915 "name": "raid_bdev1", 00:18:53.915 "uuid": "fc8244dc-ca97-490b-9779-056c1f988219", 00:18:53.915 "strip_size_kb": 64, 00:18:53.915 "state": "online", 00:18:53.915 "raid_level": "raid5f", 00:18:53.915 "superblock": true, 00:18:53.915 "num_base_bdevs": 3, 00:18:53.915 "num_base_bdevs_discovered": 3, 00:18:53.915 "num_base_bdevs_operational": 3, 00:18:53.915 "process": { 00:18:53.915 "type": "rebuild", 00:18:53.915 "target": "spare", 00:18:53.915 "progress": { 00:18:53.915 "blocks": 18432, 00:18:53.915 "percent": 14 00:18:53.915 } 00:18:53.915 }, 00:18:53.915 "base_bdevs_list": [ 00:18:53.915 { 00:18:53.915 "name": "spare", 00:18:53.915 "uuid": "49671067-0dc5-5eae-8de5-3415b1ec8a78", 00:18:53.915 "is_configured": true, 00:18:53.915 "data_offset": 2048, 00:18:53.915 "data_size": 63488 00:18:53.915 }, 00:18:53.915 { 00:18:53.915 "name": "BaseBdev2", 00:18:53.915 "uuid": "5452ccf4-845a-522a-a89c-15f1bd4c3ce7", 00:18:53.915 "is_configured": true, 00:18:53.915 "data_offset": 2048, 00:18:53.915 "data_size": 63488 00:18:53.915 }, 00:18:53.915 { 00:18:53.915 "name": "BaseBdev3", 00:18:53.915 "uuid": "9776b8a7-f46e-5c7b-8ed2-374824caaf1c", 00:18:53.915 "is_configured": true, 00:18:53.915 "data_offset": 2048, 00:18:53.915 "data_size": 63488 00:18:53.915 } 00:18:53.915 ] 00:18:53.915 }' 00:18:53.915 06:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:53.915 06:28:37 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:53.915 06:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:53.915 06:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:53.915 06:28:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:53.915 06:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.915 06:28:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.915 [2024-11-26 06:28:37.994318] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:54.176 [2024-11-26 06:28:38.073403] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:54.176 [2024-11-26 06:28:38.073486] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:54.176 [2024-11-26 06:28:38.073512] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:54.176 [2024-11-26 06:28:38.073523] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:54.176 06:28:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.176 06:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:54.176 06:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:54.176 06:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:54.176 06:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:54.176 06:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:54.176 06:28:38 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:54.176 06:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:54.176 06:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:54.176 06:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:54.176 06:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:54.176 06:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.176 06:28:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.176 06:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.176 06:28:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.176 06:28:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.176 06:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:54.176 "name": "raid_bdev1", 00:18:54.176 "uuid": "fc8244dc-ca97-490b-9779-056c1f988219", 00:18:54.176 "strip_size_kb": 64, 00:18:54.176 "state": "online", 00:18:54.176 "raid_level": "raid5f", 00:18:54.176 "superblock": true, 00:18:54.176 "num_base_bdevs": 3, 00:18:54.176 "num_base_bdevs_discovered": 2, 00:18:54.176 "num_base_bdevs_operational": 2, 00:18:54.176 "base_bdevs_list": [ 00:18:54.176 { 00:18:54.176 "name": null, 00:18:54.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.176 "is_configured": false, 00:18:54.176 "data_offset": 0, 00:18:54.176 "data_size": 63488 00:18:54.176 }, 00:18:54.176 { 00:18:54.176 "name": "BaseBdev2", 00:18:54.176 "uuid": "5452ccf4-845a-522a-a89c-15f1bd4c3ce7", 00:18:54.176 "is_configured": true, 00:18:54.176 "data_offset": 2048, 00:18:54.176 "data_size": 
63488 00:18:54.176 }, 00:18:54.176 { 00:18:54.176 "name": "BaseBdev3", 00:18:54.176 "uuid": "9776b8a7-f46e-5c7b-8ed2-374824caaf1c", 00:18:54.176 "is_configured": true, 00:18:54.176 "data_offset": 2048, 00:18:54.176 "data_size": 63488 00:18:54.176 } 00:18:54.176 ] 00:18:54.176 }' 00:18:54.176 06:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:54.176 06:28:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.747 06:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:54.747 06:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:54.747 06:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:54.748 06:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:54.748 06:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:54.748 06:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.748 06:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.748 06:28:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.748 06:28:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.748 06:28:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.748 06:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:54.748 "name": "raid_bdev1", 00:18:54.748 "uuid": "fc8244dc-ca97-490b-9779-056c1f988219", 00:18:54.748 "strip_size_kb": 64, 00:18:54.748 "state": "online", 00:18:54.748 "raid_level": "raid5f", 00:18:54.748 "superblock": true, 00:18:54.748 "num_base_bdevs": 3, 00:18:54.748 
"num_base_bdevs_discovered": 2, 00:18:54.748 "num_base_bdevs_operational": 2, 00:18:54.748 "base_bdevs_list": [ 00:18:54.748 { 00:18:54.748 "name": null, 00:18:54.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.748 "is_configured": false, 00:18:54.748 "data_offset": 0, 00:18:54.748 "data_size": 63488 00:18:54.748 }, 00:18:54.748 { 00:18:54.748 "name": "BaseBdev2", 00:18:54.748 "uuid": "5452ccf4-845a-522a-a89c-15f1bd4c3ce7", 00:18:54.748 "is_configured": true, 00:18:54.748 "data_offset": 2048, 00:18:54.748 "data_size": 63488 00:18:54.748 }, 00:18:54.748 { 00:18:54.748 "name": "BaseBdev3", 00:18:54.748 "uuid": "9776b8a7-f46e-5c7b-8ed2-374824caaf1c", 00:18:54.748 "is_configured": true, 00:18:54.748 "data_offset": 2048, 00:18:54.748 "data_size": 63488 00:18:54.748 } 00:18:54.748 ] 00:18:54.748 }' 00:18:54.748 06:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:54.748 06:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:54.748 06:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:54.748 06:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:54.748 06:28:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:54.748 06:28:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.748 06:28:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.748 [2024-11-26 06:28:38.727499] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:54.748 [2024-11-26 06:28:38.747285] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:18:54.748 06:28:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.748 06:28:38 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:54.748 [2024-11-26 06:28:38.756577] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:55.690 06:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:55.690 06:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:55.690 06:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:55.690 06:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:55.690 06:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:55.690 06:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.690 06:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.690 06:28:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.690 06:28:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.690 06:28:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.690 06:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:55.690 "name": "raid_bdev1", 00:18:55.690 "uuid": "fc8244dc-ca97-490b-9779-056c1f988219", 00:18:55.690 "strip_size_kb": 64, 00:18:55.690 "state": "online", 00:18:55.690 "raid_level": "raid5f", 00:18:55.690 "superblock": true, 00:18:55.690 "num_base_bdevs": 3, 00:18:55.690 "num_base_bdevs_discovered": 3, 00:18:55.690 "num_base_bdevs_operational": 3, 00:18:55.690 "process": { 00:18:55.690 "type": "rebuild", 00:18:55.690 "target": "spare", 00:18:55.690 "progress": { 00:18:55.690 "blocks": 18432, 00:18:55.690 "percent": 14 00:18:55.690 } 
00:18:55.690 }, 00:18:55.690 "base_bdevs_list": [ 00:18:55.690 { 00:18:55.690 "name": "spare", 00:18:55.690 "uuid": "49671067-0dc5-5eae-8de5-3415b1ec8a78", 00:18:55.691 "is_configured": true, 00:18:55.691 "data_offset": 2048, 00:18:55.691 "data_size": 63488 00:18:55.691 }, 00:18:55.691 { 00:18:55.691 "name": "BaseBdev2", 00:18:55.691 "uuid": "5452ccf4-845a-522a-a89c-15f1bd4c3ce7", 00:18:55.691 "is_configured": true, 00:18:55.691 "data_offset": 2048, 00:18:55.691 "data_size": 63488 00:18:55.691 }, 00:18:55.691 { 00:18:55.691 "name": "BaseBdev3", 00:18:55.691 "uuid": "9776b8a7-f46e-5c7b-8ed2-374824caaf1c", 00:18:55.691 "is_configured": true, 00:18:55.691 "data_offset": 2048, 00:18:55.691 "data_size": 63488 00:18:55.691 } 00:18:55.691 ] 00:18:55.691 }' 00:18:55.691 06:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:55.951 06:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:55.951 06:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:55.951 06:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:55.951 06:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:55.951 06:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:55.951 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:55.951 06:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:18:55.951 06:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:18:55.951 06:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=592 00:18:55.951 06:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:55.951 06:28:39 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:55.951 06:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:55.951 06:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:55.951 06:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:55.951 06:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:55.951 06:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.951 06:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.951 06:28:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.951 06:28:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.951 06:28:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.951 06:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:55.951 "name": "raid_bdev1", 00:18:55.952 "uuid": "fc8244dc-ca97-490b-9779-056c1f988219", 00:18:55.952 "strip_size_kb": 64, 00:18:55.952 "state": "online", 00:18:55.952 "raid_level": "raid5f", 00:18:55.952 "superblock": true, 00:18:55.952 "num_base_bdevs": 3, 00:18:55.952 "num_base_bdevs_discovered": 3, 00:18:55.952 "num_base_bdevs_operational": 3, 00:18:55.952 "process": { 00:18:55.952 "type": "rebuild", 00:18:55.952 "target": "spare", 00:18:55.952 "progress": { 00:18:55.952 "blocks": 22528, 00:18:55.952 "percent": 17 00:18:55.952 } 00:18:55.952 }, 00:18:55.952 "base_bdevs_list": [ 00:18:55.952 { 00:18:55.952 "name": "spare", 00:18:55.952 "uuid": "49671067-0dc5-5eae-8de5-3415b1ec8a78", 00:18:55.952 "is_configured": true, 00:18:55.952 "data_offset": 2048, 00:18:55.952 
"data_size": 63488 00:18:55.952 }, 00:18:55.952 { 00:18:55.952 "name": "BaseBdev2", 00:18:55.952 "uuid": "5452ccf4-845a-522a-a89c-15f1bd4c3ce7", 00:18:55.952 "is_configured": true, 00:18:55.952 "data_offset": 2048, 00:18:55.952 "data_size": 63488 00:18:55.952 }, 00:18:55.952 { 00:18:55.952 "name": "BaseBdev3", 00:18:55.952 "uuid": "9776b8a7-f46e-5c7b-8ed2-374824caaf1c", 00:18:55.952 "is_configured": true, 00:18:55.952 "data_offset": 2048, 00:18:55.952 "data_size": 63488 00:18:55.952 } 00:18:55.952 ] 00:18:55.952 }' 00:18:55.952 06:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:55.952 06:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:55.952 06:28:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:55.952 06:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:55.952 06:28:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:56.923 06:28:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:56.923 06:28:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:56.923 06:28:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:56.923 06:28:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:56.923 06:28:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:56.923 06:28:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:56.923 06:28:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.923 06:28:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.924 
06:28:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:56.924 06:28:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:57.182 06:28:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.183 06:28:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:57.183 "name": "raid_bdev1", 00:18:57.183 "uuid": "fc8244dc-ca97-490b-9779-056c1f988219", 00:18:57.183 "strip_size_kb": 64, 00:18:57.183 "state": "online", 00:18:57.183 "raid_level": "raid5f", 00:18:57.183 "superblock": true, 00:18:57.183 "num_base_bdevs": 3, 00:18:57.183 "num_base_bdevs_discovered": 3, 00:18:57.183 "num_base_bdevs_operational": 3, 00:18:57.183 "process": { 00:18:57.183 "type": "rebuild", 00:18:57.183 "target": "spare", 00:18:57.183 "progress": { 00:18:57.183 "blocks": 45056, 00:18:57.183 "percent": 35 00:18:57.183 } 00:18:57.183 }, 00:18:57.183 "base_bdevs_list": [ 00:18:57.183 { 00:18:57.183 "name": "spare", 00:18:57.183 "uuid": "49671067-0dc5-5eae-8de5-3415b1ec8a78", 00:18:57.183 "is_configured": true, 00:18:57.183 "data_offset": 2048, 00:18:57.183 "data_size": 63488 00:18:57.183 }, 00:18:57.183 { 00:18:57.183 "name": "BaseBdev2", 00:18:57.183 "uuid": "5452ccf4-845a-522a-a89c-15f1bd4c3ce7", 00:18:57.183 "is_configured": true, 00:18:57.183 "data_offset": 2048, 00:18:57.183 "data_size": 63488 00:18:57.183 }, 00:18:57.183 { 00:18:57.183 "name": "BaseBdev3", 00:18:57.183 "uuid": "9776b8a7-f46e-5c7b-8ed2-374824caaf1c", 00:18:57.183 "is_configured": true, 00:18:57.183 "data_offset": 2048, 00:18:57.183 "data_size": 63488 00:18:57.183 } 00:18:57.183 ] 00:18:57.183 }' 00:18:57.183 06:28:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:57.183 06:28:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:57.183 06:28:41 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:57.183 06:28:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:57.183 06:28:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:58.173 06:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:58.173 06:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:58.173 06:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:58.173 06:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:58.173 06:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:58.173 06:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:58.173 06:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.173 06:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:58.173 06:28:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.173 06:28:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:58.173 06:28:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.173 06:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:58.173 "name": "raid_bdev1", 00:18:58.173 "uuid": "fc8244dc-ca97-490b-9779-056c1f988219", 00:18:58.173 "strip_size_kb": 64, 00:18:58.173 "state": "online", 00:18:58.173 "raid_level": "raid5f", 00:18:58.173 "superblock": true, 00:18:58.173 "num_base_bdevs": 3, 00:18:58.173 "num_base_bdevs_discovered": 3, 00:18:58.173 "num_base_bdevs_operational": 
3, 00:18:58.173 "process": { 00:18:58.173 "type": "rebuild", 00:18:58.173 "target": "spare", 00:18:58.173 "progress": { 00:18:58.173 "blocks": 69632, 00:18:58.173 "percent": 54 00:18:58.173 } 00:18:58.173 }, 00:18:58.173 "base_bdevs_list": [ 00:18:58.173 { 00:18:58.173 "name": "spare", 00:18:58.173 "uuid": "49671067-0dc5-5eae-8de5-3415b1ec8a78", 00:18:58.173 "is_configured": true, 00:18:58.173 "data_offset": 2048, 00:18:58.173 "data_size": 63488 00:18:58.173 }, 00:18:58.173 { 00:18:58.173 "name": "BaseBdev2", 00:18:58.173 "uuid": "5452ccf4-845a-522a-a89c-15f1bd4c3ce7", 00:18:58.173 "is_configured": true, 00:18:58.173 "data_offset": 2048, 00:18:58.173 "data_size": 63488 00:18:58.173 }, 00:18:58.173 { 00:18:58.173 "name": "BaseBdev3", 00:18:58.173 "uuid": "9776b8a7-f46e-5c7b-8ed2-374824caaf1c", 00:18:58.173 "is_configured": true, 00:18:58.173 "data_offset": 2048, 00:18:58.173 "data_size": 63488 00:18:58.173 } 00:18:58.173 ] 00:18:58.173 }' 00:18:58.173 06:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:58.173 06:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:58.173 06:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:58.431 06:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:58.431 06:28:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:59.367 06:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:59.367 06:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:59.367 06:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:59.367 06:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:59.367 
06:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:59.368 06:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:59.368 06:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.368 06:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:59.368 06:28:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.368 06:28:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:59.368 06:28:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.368 06:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:59.368 "name": "raid_bdev1", 00:18:59.368 "uuid": "fc8244dc-ca97-490b-9779-056c1f988219", 00:18:59.368 "strip_size_kb": 64, 00:18:59.368 "state": "online", 00:18:59.368 "raid_level": "raid5f", 00:18:59.368 "superblock": true, 00:18:59.368 "num_base_bdevs": 3, 00:18:59.368 "num_base_bdevs_discovered": 3, 00:18:59.368 "num_base_bdevs_operational": 3, 00:18:59.368 "process": { 00:18:59.368 "type": "rebuild", 00:18:59.368 "target": "spare", 00:18:59.368 "progress": { 00:18:59.368 "blocks": 92160, 00:18:59.368 "percent": 72 00:18:59.368 } 00:18:59.368 }, 00:18:59.368 "base_bdevs_list": [ 00:18:59.368 { 00:18:59.368 "name": "spare", 00:18:59.368 "uuid": "49671067-0dc5-5eae-8de5-3415b1ec8a78", 00:18:59.368 "is_configured": true, 00:18:59.368 "data_offset": 2048, 00:18:59.368 "data_size": 63488 00:18:59.368 }, 00:18:59.368 { 00:18:59.368 "name": "BaseBdev2", 00:18:59.368 "uuid": "5452ccf4-845a-522a-a89c-15f1bd4c3ce7", 00:18:59.368 "is_configured": true, 00:18:59.368 "data_offset": 2048, 00:18:59.368 "data_size": 63488 00:18:59.368 }, 00:18:59.368 { 00:18:59.368 "name": "BaseBdev3", 00:18:59.368 "uuid": 
"9776b8a7-f46e-5c7b-8ed2-374824caaf1c", 00:18:59.368 "is_configured": true, 00:18:59.368 "data_offset": 2048, 00:18:59.368 "data_size": 63488 00:18:59.368 } 00:18:59.368 ] 00:18:59.368 }' 00:18:59.368 06:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:59.368 06:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:59.368 06:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:59.368 06:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:59.368 06:28:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:00.774 06:28:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:00.774 06:28:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:00.774 06:28:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:00.774 06:28:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:00.774 06:28:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:00.774 06:28:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:00.774 06:28:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.774 06:28:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:00.774 06:28:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.774 06:28:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:00.774 06:28:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.775 
06:28:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:00.775 "name": "raid_bdev1", 00:19:00.775 "uuid": "fc8244dc-ca97-490b-9779-056c1f988219", 00:19:00.775 "strip_size_kb": 64, 00:19:00.775 "state": "online", 00:19:00.775 "raid_level": "raid5f", 00:19:00.775 "superblock": true, 00:19:00.775 "num_base_bdevs": 3, 00:19:00.775 "num_base_bdevs_discovered": 3, 00:19:00.775 "num_base_bdevs_operational": 3, 00:19:00.775 "process": { 00:19:00.775 "type": "rebuild", 00:19:00.775 "target": "spare", 00:19:00.775 "progress": { 00:19:00.775 "blocks": 114688, 00:19:00.775 "percent": 90 00:19:00.775 } 00:19:00.775 }, 00:19:00.775 "base_bdevs_list": [ 00:19:00.775 { 00:19:00.775 "name": "spare", 00:19:00.775 "uuid": "49671067-0dc5-5eae-8de5-3415b1ec8a78", 00:19:00.775 "is_configured": true, 00:19:00.775 "data_offset": 2048, 00:19:00.775 "data_size": 63488 00:19:00.775 }, 00:19:00.775 { 00:19:00.775 "name": "BaseBdev2", 00:19:00.775 "uuid": "5452ccf4-845a-522a-a89c-15f1bd4c3ce7", 00:19:00.775 "is_configured": true, 00:19:00.775 "data_offset": 2048, 00:19:00.775 "data_size": 63488 00:19:00.775 }, 00:19:00.775 { 00:19:00.775 "name": "BaseBdev3", 00:19:00.775 "uuid": "9776b8a7-f46e-5c7b-8ed2-374824caaf1c", 00:19:00.775 "is_configured": true, 00:19:00.775 "data_offset": 2048, 00:19:00.775 "data_size": 63488 00:19:00.775 } 00:19:00.775 ] 00:19:00.775 }' 00:19:00.775 06:28:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:00.775 06:28:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:00.775 06:28:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:00.775 06:28:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:00.775 06:28:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:01.035 [2024-11-26 06:28:45.025802] 
bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:01.035 [2024-11-26 06:28:45.025920] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:01.035 [2024-11-26 06:28:45.026070] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:01.604 06:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:01.604 06:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:01.604 06:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:01.604 06:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:01.604 06:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:01.604 06:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:01.604 06:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.604 06:28:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.604 06:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:01.604 06:28:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:01.604 06:28:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.604 06:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:01.604 "name": "raid_bdev1", 00:19:01.604 "uuid": "fc8244dc-ca97-490b-9779-056c1f988219", 00:19:01.604 "strip_size_kb": 64, 00:19:01.604 "state": "online", 00:19:01.604 "raid_level": "raid5f", 00:19:01.604 "superblock": true, 00:19:01.604 "num_base_bdevs": 3, 00:19:01.604 "num_base_bdevs_discovered": 3, 
00:19:01.604 "num_base_bdevs_operational": 3, 00:19:01.604 "base_bdevs_list": [ 00:19:01.604 { 00:19:01.604 "name": "spare", 00:19:01.604 "uuid": "49671067-0dc5-5eae-8de5-3415b1ec8a78", 00:19:01.604 "is_configured": true, 00:19:01.604 "data_offset": 2048, 00:19:01.604 "data_size": 63488 00:19:01.604 }, 00:19:01.604 { 00:19:01.604 "name": "BaseBdev2", 00:19:01.604 "uuid": "5452ccf4-845a-522a-a89c-15f1bd4c3ce7", 00:19:01.604 "is_configured": true, 00:19:01.604 "data_offset": 2048, 00:19:01.604 "data_size": 63488 00:19:01.604 }, 00:19:01.604 { 00:19:01.604 "name": "BaseBdev3", 00:19:01.604 "uuid": "9776b8a7-f46e-5c7b-8ed2-374824caaf1c", 00:19:01.604 "is_configured": true, 00:19:01.604 "data_offset": 2048, 00:19:01.604 "data_size": 63488 00:19:01.604 } 00:19:01.604 ] 00:19:01.604 }' 00:19:01.604 06:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:01.604 06:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:01.865 06:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:01.865 06:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:01.865 06:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:19:01.865 06:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:01.865 06:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:01.865 06:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:01.865 06:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:01.865 06:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:01.865 06:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:19:01.865 06:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:01.865 06:28:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.865 06:28:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:01.865 06:28:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.865 06:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:01.865 "name": "raid_bdev1", 00:19:01.865 "uuid": "fc8244dc-ca97-490b-9779-056c1f988219", 00:19:01.865 "strip_size_kb": 64, 00:19:01.865 "state": "online", 00:19:01.865 "raid_level": "raid5f", 00:19:01.865 "superblock": true, 00:19:01.865 "num_base_bdevs": 3, 00:19:01.865 "num_base_bdevs_discovered": 3, 00:19:01.865 "num_base_bdevs_operational": 3, 00:19:01.865 "base_bdevs_list": [ 00:19:01.865 { 00:19:01.865 "name": "spare", 00:19:01.865 "uuid": "49671067-0dc5-5eae-8de5-3415b1ec8a78", 00:19:01.865 "is_configured": true, 00:19:01.865 "data_offset": 2048, 00:19:01.865 "data_size": 63488 00:19:01.865 }, 00:19:01.865 { 00:19:01.865 "name": "BaseBdev2", 00:19:01.865 "uuid": "5452ccf4-845a-522a-a89c-15f1bd4c3ce7", 00:19:01.865 "is_configured": true, 00:19:01.865 "data_offset": 2048, 00:19:01.865 "data_size": 63488 00:19:01.865 }, 00:19:01.865 { 00:19:01.865 "name": "BaseBdev3", 00:19:01.865 "uuid": "9776b8a7-f46e-5c7b-8ed2-374824caaf1c", 00:19:01.865 "is_configured": true, 00:19:01.865 "data_offset": 2048, 00:19:01.865 "data_size": 63488 00:19:01.865 } 00:19:01.865 ] 00:19:01.865 }' 00:19:01.865 06:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:01.865 06:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:01.865 06:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:19:01.865 06:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:01.865 06:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:01.865 06:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:01.865 06:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:01.865 06:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:01.865 06:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:01.865 06:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:01.865 06:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:01.865 06:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:01.865 06:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:01.865 06:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:01.865 06:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.865 06:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:01.865 06:28:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.865 06:28:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:01.865 06:28:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.125 06:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:02.125 "name": "raid_bdev1", 00:19:02.125 "uuid": 
"fc8244dc-ca97-490b-9779-056c1f988219", 00:19:02.125 "strip_size_kb": 64, 00:19:02.125 "state": "online", 00:19:02.125 "raid_level": "raid5f", 00:19:02.125 "superblock": true, 00:19:02.125 "num_base_bdevs": 3, 00:19:02.125 "num_base_bdevs_discovered": 3, 00:19:02.125 "num_base_bdevs_operational": 3, 00:19:02.125 "base_bdevs_list": [ 00:19:02.125 { 00:19:02.125 "name": "spare", 00:19:02.125 "uuid": "49671067-0dc5-5eae-8de5-3415b1ec8a78", 00:19:02.125 "is_configured": true, 00:19:02.125 "data_offset": 2048, 00:19:02.125 "data_size": 63488 00:19:02.125 }, 00:19:02.125 { 00:19:02.125 "name": "BaseBdev2", 00:19:02.125 "uuid": "5452ccf4-845a-522a-a89c-15f1bd4c3ce7", 00:19:02.125 "is_configured": true, 00:19:02.125 "data_offset": 2048, 00:19:02.125 "data_size": 63488 00:19:02.125 }, 00:19:02.125 { 00:19:02.125 "name": "BaseBdev3", 00:19:02.125 "uuid": "9776b8a7-f46e-5c7b-8ed2-374824caaf1c", 00:19:02.125 "is_configured": true, 00:19:02.125 "data_offset": 2048, 00:19:02.125 "data_size": 63488 00:19:02.125 } 00:19:02.125 ] 00:19:02.125 }' 00:19:02.125 06:28:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:02.125 06:28:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:02.384 06:28:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:02.384 06:28:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.384 06:28:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:02.384 [2024-11-26 06:28:46.335140] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:02.384 [2024-11-26 06:28:46.335179] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:02.384 [2024-11-26 06:28:46.335303] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:02.384 [2024-11-26 06:28:46.335451] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:02.384 [2024-11-26 06:28:46.335479] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:02.384 06:28:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.384 06:28:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.384 06:28:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:19:02.384 06:28:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.384 06:28:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:02.384 06:28:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.384 06:28:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:02.384 06:28:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:02.384 06:28:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:02.384 06:28:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:02.384 06:28:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:02.384 06:28:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:02.384 06:28:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:02.384 06:28:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:02.384 06:28:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:02.384 06:28:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # 
local i 00:19:02.384 06:28:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:02.384 06:28:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:02.384 06:28:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:02.643 /dev/nbd0 00:19:02.643 06:28:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:02.643 06:28:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:02.643 06:28:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:02.643 06:28:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:19:02.643 06:28:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:02.643 06:28:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:02.643 06:28:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:02.643 06:28:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:19:02.643 06:28:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:02.643 06:28:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:02.643 06:28:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:02.643 1+0 records in 00:19:02.643 1+0 records out 00:19:02.643 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000373973 s, 11.0 MB/s 00:19:02.643 06:28:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:02.643 06:28:46 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:19:02.643 06:28:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:02.643 06:28:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:02.643 06:28:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:19:02.643 06:28:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:02.643 06:28:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:02.644 06:28:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:02.902 /dev/nbd1 00:19:02.902 06:28:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:02.902 06:28:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:02.902 06:28:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:02.902 06:28:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:19:02.902 06:28:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:02.902 06:28:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:02.902 06:28:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:02.902 06:28:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:19:02.902 06:28:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:02.902 06:28:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:02.902 06:28:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # 
dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:02.902 1+0 records in 00:19:02.902 1+0 records out 00:19:02.902 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000393477 s, 10.4 MB/s 00:19:02.903 06:28:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:02.903 06:28:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:19:02.903 06:28:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:02.903 06:28:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:02.903 06:28:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:19:02.903 06:28:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:02.903 06:28:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:02.903 06:28:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:03.161 06:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:03.161 06:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:03.161 06:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:03.161 06:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:03.161 06:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:19:03.161 06:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:03.161 06:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_stop_disk /dev/nbd0 00:19:03.421 06:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:03.421 06:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:03.421 06:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:03.421 06:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:03.421 06:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:03.421 06:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:03.421 06:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:03.421 06:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:03.421 06:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:03.421 06:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:03.682 06:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:03.682 06:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:03.682 06:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:03.682 06:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:03.682 06:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:03.682 06:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:03.682 06:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:03.682 06:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:03.682 06:28:47 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:03.682 06:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:03.682 06:28:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.682 06:28:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:03.682 06:28:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.682 06:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:03.682 06:28:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.682 06:28:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:03.682 [2024-11-26 06:28:47.587216] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:03.682 [2024-11-26 06:28:47.587297] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:03.682 [2024-11-26 06:28:47.587322] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:19:03.682 [2024-11-26 06:28:47.587336] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:03.682 [2024-11-26 06:28:47.590254] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:03.682 [2024-11-26 06:28:47.590298] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:03.682 [2024-11-26 06:28:47.590406] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:03.682 [2024-11-26 06:28:47.590488] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:03.682 [2024-11-26 06:28:47.590706] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:03.682 [2024-11-26 06:28:47.590853] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:03.682 spare 00:19:03.682 06:28:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.682 06:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:03.682 06:28:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.682 06:28:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:03.682 [2024-11-26 06:28:47.690772] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:03.682 [2024-11-26 06:28:47.690809] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:03.682 [2024-11-26 06:28:47.691174] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:19:03.682 [2024-11-26 06:28:47.696671] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:03.682 [2024-11-26 06:28:47.696696] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:03.682 [2024-11-26 06:28:47.696954] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:03.682 06:28:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.682 06:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:03.682 06:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:03.682 06:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:03.682 06:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:03.682 06:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:19:03.682 06:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:03.682 06:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:03.682 06:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:03.682 06:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:03.682 06:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:03.682 06:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.682 06:28:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.682 06:28:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:03.682 06:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:03.682 06:28:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.682 06:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:03.682 "name": "raid_bdev1", 00:19:03.682 "uuid": "fc8244dc-ca97-490b-9779-056c1f988219", 00:19:03.682 "strip_size_kb": 64, 00:19:03.682 "state": "online", 00:19:03.682 "raid_level": "raid5f", 00:19:03.682 "superblock": true, 00:19:03.682 "num_base_bdevs": 3, 00:19:03.682 "num_base_bdevs_discovered": 3, 00:19:03.682 "num_base_bdevs_operational": 3, 00:19:03.682 "base_bdevs_list": [ 00:19:03.682 { 00:19:03.682 "name": "spare", 00:19:03.682 "uuid": "49671067-0dc5-5eae-8de5-3415b1ec8a78", 00:19:03.682 "is_configured": true, 00:19:03.682 "data_offset": 2048, 00:19:03.682 "data_size": 63488 00:19:03.682 }, 00:19:03.682 { 00:19:03.682 "name": "BaseBdev2", 00:19:03.682 "uuid": "5452ccf4-845a-522a-a89c-15f1bd4c3ce7", 00:19:03.682 "is_configured": true, 00:19:03.682 "data_offset": 
2048, 00:19:03.682 "data_size": 63488 00:19:03.682 }, 00:19:03.682 { 00:19:03.682 "name": "BaseBdev3", 00:19:03.682 "uuid": "9776b8a7-f46e-5c7b-8ed2-374824caaf1c", 00:19:03.682 "is_configured": true, 00:19:03.682 "data_offset": 2048, 00:19:03.682 "data_size": 63488 00:19:03.682 } 00:19:03.682 ] 00:19:03.682 }' 00:19:03.682 06:28:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:03.682 06:28:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:04.251 06:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:04.251 06:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:04.251 06:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:04.251 06:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:04.251 06:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:04.251 06:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.251 06:28:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.251 06:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:04.251 06:28:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:04.251 06:28:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.251 06:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:04.251 "name": "raid_bdev1", 00:19:04.251 "uuid": "fc8244dc-ca97-490b-9779-056c1f988219", 00:19:04.251 "strip_size_kb": 64, 00:19:04.251 "state": "online", 00:19:04.251 "raid_level": "raid5f", 00:19:04.251 "superblock": true, 00:19:04.251 
"num_base_bdevs": 3, 00:19:04.251 "num_base_bdevs_discovered": 3, 00:19:04.251 "num_base_bdevs_operational": 3, 00:19:04.251 "base_bdevs_list": [ 00:19:04.251 { 00:19:04.251 "name": "spare", 00:19:04.251 "uuid": "49671067-0dc5-5eae-8de5-3415b1ec8a78", 00:19:04.251 "is_configured": true, 00:19:04.251 "data_offset": 2048, 00:19:04.251 "data_size": 63488 00:19:04.251 }, 00:19:04.251 { 00:19:04.251 "name": "BaseBdev2", 00:19:04.251 "uuid": "5452ccf4-845a-522a-a89c-15f1bd4c3ce7", 00:19:04.251 "is_configured": true, 00:19:04.251 "data_offset": 2048, 00:19:04.251 "data_size": 63488 00:19:04.251 }, 00:19:04.251 { 00:19:04.251 "name": "BaseBdev3", 00:19:04.251 "uuid": "9776b8a7-f46e-5c7b-8ed2-374824caaf1c", 00:19:04.251 "is_configured": true, 00:19:04.251 "data_offset": 2048, 00:19:04.251 "data_size": 63488 00:19:04.251 } 00:19:04.251 ] 00:19:04.251 }' 00:19:04.251 06:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:04.251 06:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:04.251 06:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:04.251 06:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:04.252 06:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.252 06:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:04.252 06:28:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.252 06:28:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:04.252 06:28:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.252 06:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:04.252 06:28:48 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:04.252 06:28:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.252 06:28:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:04.252 [2024-11-26 06:28:48.359508] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:04.252 06:28:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.252 06:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:19:04.252 06:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:04.252 06:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:04.252 06:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:04.252 06:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:04.252 06:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:04.252 06:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:04.252 06:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:04.252 06:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:04.252 06:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:04.252 06:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:04.252 06:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.252 06:28:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:19:04.252 06:28:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:04.511 06:28:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.511 06:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:04.511 "name": "raid_bdev1", 00:19:04.511 "uuid": "fc8244dc-ca97-490b-9779-056c1f988219", 00:19:04.511 "strip_size_kb": 64, 00:19:04.511 "state": "online", 00:19:04.511 "raid_level": "raid5f", 00:19:04.511 "superblock": true, 00:19:04.511 "num_base_bdevs": 3, 00:19:04.511 "num_base_bdevs_discovered": 2, 00:19:04.511 "num_base_bdevs_operational": 2, 00:19:04.511 "base_bdevs_list": [ 00:19:04.511 { 00:19:04.511 "name": null, 00:19:04.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:04.511 "is_configured": false, 00:19:04.511 "data_offset": 0, 00:19:04.511 "data_size": 63488 00:19:04.511 }, 00:19:04.511 { 00:19:04.511 "name": "BaseBdev2", 00:19:04.511 "uuid": "5452ccf4-845a-522a-a89c-15f1bd4c3ce7", 00:19:04.511 "is_configured": true, 00:19:04.511 "data_offset": 2048, 00:19:04.511 "data_size": 63488 00:19:04.511 }, 00:19:04.511 { 00:19:04.511 "name": "BaseBdev3", 00:19:04.511 "uuid": "9776b8a7-f46e-5c7b-8ed2-374824caaf1c", 00:19:04.511 "is_configured": true, 00:19:04.511 "data_offset": 2048, 00:19:04.511 "data_size": 63488 00:19:04.511 } 00:19:04.511 ] 00:19:04.511 }' 00:19:04.511 06:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:04.511 06:28:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:04.772 06:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:04.772 06:28:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.772 06:28:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:04.772 [2024-11-26 06:28:48.806758] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:04.772 [2024-11-26 06:28:48.807000] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:04.772 [2024-11-26 06:28:48.807058] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:19:04.772 [2024-11-26 06:28:48.807115] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:04.772 [2024-11-26 06:28:48.824357] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:19:04.772 06:28:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.772 06:28:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:04.772 [2024-11-26 06:28:48.832023] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:05.719 06:28:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:05.719 06:28:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:05.719 06:28:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:05.719 06:28:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:05.719 06:28:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:05.719 06:28:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.719 06:28:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.719 06:28:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:05.719 06:28:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:19:05.978 06:28:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.979 06:28:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:05.979 "name": "raid_bdev1", 00:19:05.979 "uuid": "fc8244dc-ca97-490b-9779-056c1f988219", 00:19:05.979 "strip_size_kb": 64, 00:19:05.979 "state": "online", 00:19:05.979 "raid_level": "raid5f", 00:19:05.979 "superblock": true, 00:19:05.979 "num_base_bdevs": 3, 00:19:05.979 "num_base_bdevs_discovered": 3, 00:19:05.979 "num_base_bdevs_operational": 3, 00:19:05.979 "process": { 00:19:05.979 "type": "rebuild", 00:19:05.979 "target": "spare", 00:19:05.979 "progress": { 00:19:05.979 "blocks": 20480, 00:19:05.979 "percent": 16 00:19:05.979 } 00:19:05.979 }, 00:19:05.979 "base_bdevs_list": [ 00:19:05.979 { 00:19:05.979 "name": "spare", 00:19:05.979 "uuid": "49671067-0dc5-5eae-8de5-3415b1ec8a78", 00:19:05.979 "is_configured": true, 00:19:05.979 "data_offset": 2048, 00:19:05.979 "data_size": 63488 00:19:05.979 }, 00:19:05.979 { 00:19:05.979 "name": "BaseBdev2", 00:19:05.979 "uuid": "5452ccf4-845a-522a-a89c-15f1bd4c3ce7", 00:19:05.979 "is_configured": true, 00:19:05.979 "data_offset": 2048, 00:19:05.979 "data_size": 63488 00:19:05.979 }, 00:19:05.979 { 00:19:05.979 "name": "BaseBdev3", 00:19:05.979 "uuid": "9776b8a7-f46e-5c7b-8ed2-374824caaf1c", 00:19:05.979 "is_configured": true, 00:19:05.979 "data_offset": 2048, 00:19:05.979 "data_size": 63488 00:19:05.979 } 00:19:05.979 ] 00:19:05.979 }' 00:19:05.979 06:28:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:05.979 06:28:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:05.979 06:28:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:05.979 06:28:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:19:05.979 06:28:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:05.979 06:28:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.979 06:28:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:05.979 [2024-11-26 06:28:49.967892] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:05.979 [2024-11-26 06:28:50.045483] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:05.979 [2024-11-26 06:28:50.045564] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:05.979 [2024-11-26 06:28:50.045581] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:05.979 [2024-11-26 06:28:50.045593] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:05.979 06:28:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.979 06:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:19:05.979 06:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:05.979 06:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:05.979 06:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:05.979 06:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:05.979 06:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:05.979 06:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:05.979 06:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:05.979 06:28:50 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:05.979 06:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:05.979 06:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:05.979 06:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.979 06:28:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.979 06:28:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:06.239 06:28:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.239 06:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:06.239 "name": "raid_bdev1", 00:19:06.239 "uuid": "fc8244dc-ca97-490b-9779-056c1f988219", 00:19:06.239 "strip_size_kb": 64, 00:19:06.239 "state": "online", 00:19:06.239 "raid_level": "raid5f", 00:19:06.239 "superblock": true, 00:19:06.239 "num_base_bdevs": 3, 00:19:06.239 "num_base_bdevs_discovered": 2, 00:19:06.239 "num_base_bdevs_operational": 2, 00:19:06.239 "base_bdevs_list": [ 00:19:06.239 { 00:19:06.239 "name": null, 00:19:06.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:06.239 "is_configured": false, 00:19:06.239 "data_offset": 0, 00:19:06.239 "data_size": 63488 00:19:06.239 }, 00:19:06.239 { 00:19:06.239 "name": "BaseBdev2", 00:19:06.239 "uuid": "5452ccf4-845a-522a-a89c-15f1bd4c3ce7", 00:19:06.239 "is_configured": true, 00:19:06.239 "data_offset": 2048, 00:19:06.239 "data_size": 63488 00:19:06.239 }, 00:19:06.239 { 00:19:06.239 "name": "BaseBdev3", 00:19:06.239 "uuid": "9776b8a7-f46e-5c7b-8ed2-374824caaf1c", 00:19:06.239 "is_configured": true, 00:19:06.239 "data_offset": 2048, 00:19:06.239 "data_size": 63488 00:19:06.239 } 00:19:06.239 ] 00:19:06.239 }' 00:19:06.239 06:28:50 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:06.239 06:28:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:06.500 06:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:06.500 06:28:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.500 06:28:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:06.500 [2024-11-26 06:28:50.557357] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:06.500 [2024-11-26 06:28:50.557448] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:06.500 [2024-11-26 06:28:50.557477] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:19:06.500 [2024-11-26 06:28:50.557496] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:06.500 [2024-11-26 06:28:50.558159] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:06.500 [2024-11-26 06:28:50.558196] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:06.500 [2024-11-26 06:28:50.558337] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:06.500 [2024-11-26 06:28:50.558368] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:06.500 [2024-11-26 06:28:50.558381] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:06.500 [2024-11-26 06:28:50.558410] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:06.500 [2024-11-26 06:28:50.576562] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:19:06.500 spare 00:19:06.500 06:28:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.500 06:28:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:06.500 [2024-11-26 06:28:50.585504] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:07.881 06:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:07.881 06:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:07.881 06:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:07.881 06:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:07.881 06:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:07.881 06:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.881 06:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:07.881 06:28:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.881 06:28:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:07.881 06:28:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.881 06:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:07.881 "name": "raid_bdev1", 00:19:07.881 "uuid": "fc8244dc-ca97-490b-9779-056c1f988219", 00:19:07.881 "strip_size_kb": 64, 00:19:07.881 "state": 
"online", 00:19:07.881 "raid_level": "raid5f", 00:19:07.881 "superblock": true, 00:19:07.881 "num_base_bdevs": 3, 00:19:07.881 "num_base_bdevs_discovered": 3, 00:19:07.881 "num_base_bdevs_operational": 3, 00:19:07.881 "process": { 00:19:07.881 "type": "rebuild", 00:19:07.881 "target": "spare", 00:19:07.881 "progress": { 00:19:07.881 "blocks": 18432, 00:19:07.881 "percent": 14 00:19:07.881 } 00:19:07.881 }, 00:19:07.881 "base_bdevs_list": [ 00:19:07.881 { 00:19:07.881 "name": "spare", 00:19:07.881 "uuid": "49671067-0dc5-5eae-8de5-3415b1ec8a78", 00:19:07.881 "is_configured": true, 00:19:07.881 "data_offset": 2048, 00:19:07.881 "data_size": 63488 00:19:07.881 }, 00:19:07.881 { 00:19:07.881 "name": "BaseBdev2", 00:19:07.881 "uuid": "5452ccf4-845a-522a-a89c-15f1bd4c3ce7", 00:19:07.881 "is_configured": true, 00:19:07.881 "data_offset": 2048, 00:19:07.881 "data_size": 63488 00:19:07.881 }, 00:19:07.881 { 00:19:07.881 "name": "BaseBdev3", 00:19:07.881 "uuid": "9776b8a7-f46e-5c7b-8ed2-374824caaf1c", 00:19:07.881 "is_configured": true, 00:19:07.881 "data_offset": 2048, 00:19:07.881 "data_size": 63488 00:19:07.881 } 00:19:07.881 ] 00:19:07.881 }' 00:19:07.881 06:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:07.881 06:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:07.881 06:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:07.881 06:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:07.881 06:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:07.881 06:28:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.881 06:28:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:07.881 [2024-11-26 06:28:51.740919] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:07.881 [2024-11-26 06:28:51.799817] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:07.881 [2024-11-26 06:28:51.799893] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:07.881 [2024-11-26 06:28:51.799915] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:07.881 [2024-11-26 06:28:51.799923] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:07.881 06:28:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.881 06:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:19:07.881 06:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:07.881 06:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:07.881 06:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:07.881 06:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:07.881 06:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:07.882 06:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:07.882 06:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:07.882 06:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:07.882 06:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:07.882 06:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.882 06:28:51 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:07.882 06:28:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.882 06:28:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:07.882 06:28:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.882 06:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:07.882 "name": "raid_bdev1", 00:19:07.882 "uuid": "fc8244dc-ca97-490b-9779-056c1f988219", 00:19:07.882 "strip_size_kb": 64, 00:19:07.882 "state": "online", 00:19:07.882 "raid_level": "raid5f", 00:19:07.882 "superblock": true, 00:19:07.882 "num_base_bdevs": 3, 00:19:07.882 "num_base_bdevs_discovered": 2, 00:19:07.882 "num_base_bdevs_operational": 2, 00:19:07.882 "base_bdevs_list": [ 00:19:07.882 { 00:19:07.882 "name": null, 00:19:07.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:07.882 "is_configured": false, 00:19:07.882 "data_offset": 0, 00:19:07.882 "data_size": 63488 00:19:07.882 }, 00:19:07.882 { 00:19:07.882 "name": "BaseBdev2", 00:19:07.882 "uuid": "5452ccf4-845a-522a-a89c-15f1bd4c3ce7", 00:19:07.882 "is_configured": true, 00:19:07.882 "data_offset": 2048, 00:19:07.882 "data_size": 63488 00:19:07.882 }, 00:19:07.882 { 00:19:07.882 "name": "BaseBdev3", 00:19:07.882 "uuid": "9776b8a7-f46e-5c7b-8ed2-374824caaf1c", 00:19:07.882 "is_configured": true, 00:19:07.882 "data_offset": 2048, 00:19:07.882 "data_size": 63488 00:19:07.882 } 00:19:07.882 ] 00:19:07.882 }' 00:19:07.882 06:28:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:07.882 06:28:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:08.451 06:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:08.451 06:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:19:08.451 06:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:08.451 06:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:08.451 06:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:08.451 06:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:08.451 06:28:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.451 06:28:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:08.451 06:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:08.451 06:28:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.451 06:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:08.451 "name": "raid_bdev1", 00:19:08.451 "uuid": "fc8244dc-ca97-490b-9779-056c1f988219", 00:19:08.451 "strip_size_kb": 64, 00:19:08.451 "state": "online", 00:19:08.451 "raid_level": "raid5f", 00:19:08.451 "superblock": true, 00:19:08.451 "num_base_bdevs": 3, 00:19:08.451 "num_base_bdevs_discovered": 2, 00:19:08.451 "num_base_bdevs_operational": 2, 00:19:08.451 "base_bdevs_list": [ 00:19:08.451 { 00:19:08.451 "name": null, 00:19:08.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.451 "is_configured": false, 00:19:08.451 "data_offset": 0, 00:19:08.451 "data_size": 63488 00:19:08.451 }, 00:19:08.451 { 00:19:08.451 "name": "BaseBdev2", 00:19:08.451 "uuid": "5452ccf4-845a-522a-a89c-15f1bd4c3ce7", 00:19:08.451 "is_configured": true, 00:19:08.451 "data_offset": 2048, 00:19:08.451 "data_size": 63488 00:19:08.451 }, 00:19:08.451 { 00:19:08.451 "name": "BaseBdev3", 00:19:08.451 "uuid": "9776b8a7-f46e-5c7b-8ed2-374824caaf1c", 00:19:08.451 "is_configured": true, 
00:19:08.451 "data_offset": 2048, 00:19:08.451 "data_size": 63488 00:19:08.451 } 00:19:08.451 ] 00:19:08.451 }' 00:19:08.451 06:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:08.451 06:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:08.451 06:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:08.451 06:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:08.451 06:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:08.451 06:28:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.451 06:28:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:08.451 06:28:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.451 06:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:08.451 06:28:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.451 06:28:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:08.451 [2024-11-26 06:28:52.435933] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:08.451 [2024-11-26 06:28:52.436007] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:08.451 [2024-11-26 06:28:52.436058] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:19:08.451 [2024-11-26 06:28:52.436083] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:08.451 [2024-11-26 06:28:52.436750] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:08.451 [2024-11-26 
06:28:52.436784] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:08.451 [2024-11-26 06:28:52.436898] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:08.451 [2024-11-26 06:28:52.436939] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:08.451 [2024-11-26 06:28:52.436970] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:08.451 [2024-11-26 06:28:52.436983] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:08.451 BaseBdev1 00:19:08.451 06:28:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.451 06:28:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:09.390 06:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:19:09.390 06:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:09.390 06:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:09.390 06:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:09.390 06:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:09.390 06:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:09.390 06:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:09.390 06:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:09.390 06:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:09.390 06:28:53 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:09.390 06:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.390 06:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:09.390 06:28:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.390 06:28:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:09.390 06:28:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.390 06:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:09.390 "name": "raid_bdev1", 00:19:09.390 "uuid": "fc8244dc-ca97-490b-9779-056c1f988219", 00:19:09.390 "strip_size_kb": 64, 00:19:09.390 "state": "online", 00:19:09.390 "raid_level": "raid5f", 00:19:09.390 "superblock": true, 00:19:09.390 "num_base_bdevs": 3, 00:19:09.390 "num_base_bdevs_discovered": 2, 00:19:09.390 "num_base_bdevs_operational": 2, 00:19:09.390 "base_bdevs_list": [ 00:19:09.390 { 00:19:09.390 "name": null, 00:19:09.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.390 "is_configured": false, 00:19:09.390 "data_offset": 0, 00:19:09.390 "data_size": 63488 00:19:09.390 }, 00:19:09.390 { 00:19:09.390 "name": "BaseBdev2", 00:19:09.390 "uuid": "5452ccf4-845a-522a-a89c-15f1bd4c3ce7", 00:19:09.390 "is_configured": true, 00:19:09.390 "data_offset": 2048, 00:19:09.390 "data_size": 63488 00:19:09.390 }, 00:19:09.390 { 00:19:09.390 "name": "BaseBdev3", 00:19:09.390 "uuid": "9776b8a7-f46e-5c7b-8ed2-374824caaf1c", 00:19:09.390 "is_configured": true, 00:19:09.390 "data_offset": 2048, 00:19:09.390 "data_size": 63488 00:19:09.390 } 00:19:09.390 ] 00:19:09.390 }' 00:19:09.390 06:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:09.390 06:28:53 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:19:09.960 06:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:09.960 06:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:09.960 06:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:09.960 06:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:09.960 06:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:09.960 06:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.960 06:28:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.960 06:28:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:09.961 06:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:09.961 06:28:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.961 06:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:09.961 "name": "raid_bdev1", 00:19:09.961 "uuid": "fc8244dc-ca97-490b-9779-056c1f988219", 00:19:09.961 "strip_size_kb": 64, 00:19:09.961 "state": "online", 00:19:09.961 "raid_level": "raid5f", 00:19:09.961 "superblock": true, 00:19:09.961 "num_base_bdevs": 3, 00:19:09.961 "num_base_bdevs_discovered": 2, 00:19:09.961 "num_base_bdevs_operational": 2, 00:19:09.961 "base_bdevs_list": [ 00:19:09.961 { 00:19:09.961 "name": null, 00:19:09.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.961 "is_configured": false, 00:19:09.961 "data_offset": 0, 00:19:09.961 "data_size": 63488 00:19:09.961 }, 00:19:09.961 { 00:19:09.961 "name": "BaseBdev2", 00:19:09.961 "uuid": "5452ccf4-845a-522a-a89c-15f1bd4c3ce7", 
00:19:09.961 "is_configured": true, 00:19:09.961 "data_offset": 2048, 00:19:09.961 "data_size": 63488 00:19:09.961 }, 00:19:09.961 { 00:19:09.961 "name": "BaseBdev3", 00:19:09.961 "uuid": "9776b8a7-f46e-5c7b-8ed2-374824caaf1c", 00:19:09.961 "is_configured": true, 00:19:09.961 "data_offset": 2048, 00:19:09.961 "data_size": 63488 00:19:09.961 } 00:19:09.961 ] 00:19:09.961 }' 00:19:09.961 06:28:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:09.961 06:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:09.961 06:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:09.961 06:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:09.961 06:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:09.961 06:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:19:09.961 06:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:09.961 06:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:09.961 06:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:09.961 06:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:09.961 06:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:09.961 06:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:09.961 06:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.961 06:28:54 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:09.961 [2024-11-26 06:28:54.081296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:09.961 [2024-11-26 06:28:54.081576] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:09.961 [2024-11-26 06:28:54.081647] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:09.961 request: 00:19:09.961 { 00:19:09.961 "base_bdev": "BaseBdev1", 00:19:09.961 "raid_bdev": "raid_bdev1", 00:19:09.961 "method": "bdev_raid_add_base_bdev", 00:19:09.961 "req_id": 1 00:19:09.961 } 00:19:09.961 Got JSON-RPC error response 00:19:09.961 response: 00:19:09.961 { 00:19:09.961 "code": -22, 00:19:09.961 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:09.961 } 00:19:09.961 06:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:09.961 06:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:19:09.961 06:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:09.961 06:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:09.961 06:28:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:09.961 06:28:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:11.341 06:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:19:11.341 06:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:11.341 06:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:11.341 06:28:55 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:11.341 06:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:11.341 06:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:11.341 06:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:11.341 06:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:11.341 06:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:11.341 06:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:11.341 06:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.341 06:28:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.341 06:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.341 06:28:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:11.341 06:28:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.341 06:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:11.341 "name": "raid_bdev1", 00:19:11.341 "uuid": "fc8244dc-ca97-490b-9779-056c1f988219", 00:19:11.341 "strip_size_kb": 64, 00:19:11.341 "state": "online", 00:19:11.341 "raid_level": "raid5f", 00:19:11.341 "superblock": true, 00:19:11.341 "num_base_bdevs": 3, 00:19:11.341 "num_base_bdevs_discovered": 2, 00:19:11.341 "num_base_bdevs_operational": 2, 00:19:11.341 "base_bdevs_list": [ 00:19:11.341 { 00:19:11.341 "name": null, 00:19:11.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.341 "is_configured": false, 00:19:11.341 "data_offset": 0, 00:19:11.341 "data_size": 63488 00:19:11.341 }, 00:19:11.341 { 00:19:11.341 
"name": "BaseBdev2", 00:19:11.341 "uuid": "5452ccf4-845a-522a-a89c-15f1bd4c3ce7", 00:19:11.341 "is_configured": true, 00:19:11.341 "data_offset": 2048, 00:19:11.341 "data_size": 63488 00:19:11.341 }, 00:19:11.341 { 00:19:11.341 "name": "BaseBdev3", 00:19:11.341 "uuid": "9776b8a7-f46e-5c7b-8ed2-374824caaf1c", 00:19:11.341 "is_configured": true, 00:19:11.341 "data_offset": 2048, 00:19:11.341 "data_size": 63488 00:19:11.341 } 00:19:11.341 ] 00:19:11.341 }' 00:19:11.341 06:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:11.341 06:28:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:11.600 06:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:11.601 06:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:11.601 06:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:11.601 06:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:11.601 06:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:11.601 06:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.601 06:28:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.601 06:28:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:11.601 06:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.601 06:28:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.601 06:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:11.601 "name": "raid_bdev1", 00:19:11.601 "uuid": "fc8244dc-ca97-490b-9779-056c1f988219", 00:19:11.601 
"strip_size_kb": 64, 00:19:11.601 "state": "online", 00:19:11.601 "raid_level": "raid5f", 00:19:11.601 "superblock": true, 00:19:11.601 "num_base_bdevs": 3, 00:19:11.601 "num_base_bdevs_discovered": 2, 00:19:11.601 "num_base_bdevs_operational": 2, 00:19:11.601 "base_bdevs_list": [ 00:19:11.601 { 00:19:11.601 "name": null, 00:19:11.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.601 "is_configured": false, 00:19:11.601 "data_offset": 0, 00:19:11.601 "data_size": 63488 00:19:11.601 }, 00:19:11.601 { 00:19:11.601 "name": "BaseBdev2", 00:19:11.601 "uuid": "5452ccf4-845a-522a-a89c-15f1bd4c3ce7", 00:19:11.601 "is_configured": true, 00:19:11.601 "data_offset": 2048, 00:19:11.601 "data_size": 63488 00:19:11.601 }, 00:19:11.601 { 00:19:11.601 "name": "BaseBdev3", 00:19:11.601 "uuid": "9776b8a7-f46e-5c7b-8ed2-374824caaf1c", 00:19:11.601 "is_configured": true, 00:19:11.601 "data_offset": 2048, 00:19:11.601 "data_size": 63488 00:19:11.601 } 00:19:11.601 ] 00:19:11.601 }' 00:19:11.601 06:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:11.601 06:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:11.601 06:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:11.601 06:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:11.601 06:28:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82589 00:19:11.601 06:28:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 82589 ']' 00:19:11.601 06:28:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 82589 00:19:11.601 06:28:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:19:11.601 06:28:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:11.601 06:28:55 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82589 00:19:11.860 killing process with pid 82589 00:19:11.860 Received shutdown signal, test time was about 60.000000 seconds 00:19:11.860 00:19:11.860 Latency(us) 00:19:11.860 [2024-11-26T06:28:55.997Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:11.860 [2024-11-26T06:28:55.997Z] =================================================================================================================== 00:19:11.860 [2024-11-26T06:28:55.997Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:11.860 06:28:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:11.860 06:28:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:11.860 06:28:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82589' 00:19:11.860 06:28:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 82589 00:19:11.860 [2024-11-26 06:28:55.744143] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:11.860 06:28:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 82589 00:19:11.860 [2024-11-26 06:28:55.744315] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:11.860 [2024-11-26 06:28:55.744438] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:11.860 [2024-11-26 06:28:55.744455] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:12.118 [2024-11-26 06:28:56.181142] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:13.492 ************************************ 00:19:13.492 END TEST raid5f_rebuild_test_sb 00:19:13.492 ************************************ 00:19:13.492 06:28:57 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:19:13.492 00:19:13.492 real 0m23.777s 00:19:13.492 user 0m30.287s 00:19:13.492 sys 0m3.001s 00:19:13.493 06:28:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:13.493 06:28:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.493 06:28:57 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:19:13.493 06:28:57 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:19:13.493 06:28:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:13.493 06:28:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:13.493 06:28:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:13.493 ************************************ 00:19:13.493 START TEST raid5f_state_function_test 00:19:13.493 ************************************ 00:19:13.493 06:28:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:19:13.493 06:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:19:13.493 06:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:19:13.493 06:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:19:13.493 06:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:13.493 06:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:13.493 06:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:13.493 06:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:13.493 06:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:19:13.493 06:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:13.493 06:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:13.493 06:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:13.493 06:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:13.493 06:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:19:13.493 06:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:13.493 06:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:13.493 06:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:19:13.493 06:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:13.493 06:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:13.493 06:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:13.493 06:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:13.493 06:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:13.493 06:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:13.493 06:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:13.493 06:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:13.493 06:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:19:13.493 06:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:19:13.493 06:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:19:13.493 06:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:19:13.493 06:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:19:13.493 06:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=83343 00:19:13.493 06:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:13.493 06:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83343' 00:19:13.493 Process raid pid: 83343 00:19:13.493 06:28:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 83343 00:19:13.493 06:28:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 83343 ']' 00:19:13.493 06:28:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:13.493 06:28:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:13.493 06:28:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:13.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:13.493 06:28:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:13.493 06:28:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.493 [2024-11-26 06:28:57.559566] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:19:13.493 [2024-11-26 06:28:57.559830] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:13.752 [2024-11-26 06:28:57.739981] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:13.752 [2024-11-26 06:28:57.882043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:14.010 [2024-11-26 06:28:58.128489] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:14.010 [2024-11-26 06:28:58.128664] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:14.268 06:28:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:14.268 06:28:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:19:14.268 06:28:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:19:14.268 06:28:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.268 06:28:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.528 [2024-11-26 06:28:58.401408] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:14.528 [2024-11-26 06:28:58.401477] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:14.528 [2024-11-26 06:28:58.401489] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:14.528 [2024-11-26 06:28:58.401499] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:14.528 [2024-11-26 06:28:58.401506] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:19:14.528 [2024-11-26 06:28:58.401516] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:14.528 [2024-11-26 06:28:58.401522] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:14.528 [2024-11-26 06:28:58.401532] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:14.528 06:28:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.528 06:28:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:14.528 06:28:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:14.528 06:28:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:14.528 06:28:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:14.528 06:28:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:14.528 06:28:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:14.528 06:28:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:14.528 06:28:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:14.528 06:28:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:14.528 06:28:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:14.528 06:28:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.528 06:28:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:14.528 06:28:58 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.528 06:28:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.528 06:28:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.528 06:28:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:14.528 "name": "Existed_Raid", 00:19:14.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:14.528 "strip_size_kb": 64, 00:19:14.528 "state": "configuring", 00:19:14.528 "raid_level": "raid5f", 00:19:14.528 "superblock": false, 00:19:14.528 "num_base_bdevs": 4, 00:19:14.528 "num_base_bdevs_discovered": 0, 00:19:14.528 "num_base_bdevs_operational": 4, 00:19:14.528 "base_bdevs_list": [ 00:19:14.528 { 00:19:14.528 "name": "BaseBdev1", 00:19:14.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:14.528 "is_configured": false, 00:19:14.528 "data_offset": 0, 00:19:14.528 "data_size": 0 00:19:14.528 }, 00:19:14.528 { 00:19:14.528 "name": "BaseBdev2", 00:19:14.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:14.528 "is_configured": false, 00:19:14.528 "data_offset": 0, 00:19:14.528 "data_size": 0 00:19:14.528 }, 00:19:14.528 { 00:19:14.528 "name": "BaseBdev3", 00:19:14.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:14.528 "is_configured": false, 00:19:14.528 "data_offset": 0, 00:19:14.528 "data_size": 0 00:19:14.528 }, 00:19:14.528 { 00:19:14.528 "name": "BaseBdev4", 00:19:14.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:14.528 "is_configured": false, 00:19:14.528 "data_offset": 0, 00:19:14.528 "data_size": 0 00:19:14.528 } 00:19:14.528 ] 00:19:14.528 }' 00:19:14.528 06:28:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:14.528 06:28:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.788 06:28:58 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:14.788 06:28:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.788 06:28:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.788 [2024-11-26 06:28:58.892546] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:14.788 [2024-11-26 06:28:58.892597] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:19:14.788 06:28:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.788 06:28:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:19:14.788 06:28:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.788 06:28:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.788 [2024-11-26 06:28:58.900512] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:14.788 [2024-11-26 06:28:58.900562] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:14.788 [2024-11-26 06:28:58.900573] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:14.788 [2024-11-26 06:28:58.900583] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:14.788 [2024-11-26 06:28:58.900590] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:14.788 [2024-11-26 06:28:58.900600] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:14.788 [2024-11-26 06:28:58.900607] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:19:14.788 [2024-11-26 06:28:58.900616] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:14.788 06:28:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.788 06:28:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:14.788 06:28:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.788 06:28:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:15.047 [2024-11-26 06:28:58.951212] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:15.047 BaseBdev1 00:19:15.047 06:28:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.047 06:28:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:15.047 06:28:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:19:15.047 06:28:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:15.047 06:28:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:15.047 06:28:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:15.047 06:28:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:15.047 06:28:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:15.047 06:28:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.047 06:28:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:15.047 06:28:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.047 
06:28:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:15.047 06:28:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.047 06:28:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:15.047 [ 00:19:15.047 { 00:19:15.047 "name": "BaseBdev1", 00:19:15.047 "aliases": [ 00:19:15.047 "47325a83-68fb-42fb-9aa9-f023ec6bfbe1" 00:19:15.047 ], 00:19:15.047 "product_name": "Malloc disk", 00:19:15.047 "block_size": 512, 00:19:15.047 "num_blocks": 65536, 00:19:15.047 "uuid": "47325a83-68fb-42fb-9aa9-f023ec6bfbe1", 00:19:15.047 "assigned_rate_limits": { 00:19:15.047 "rw_ios_per_sec": 0, 00:19:15.047 "rw_mbytes_per_sec": 0, 00:19:15.047 "r_mbytes_per_sec": 0, 00:19:15.047 "w_mbytes_per_sec": 0 00:19:15.047 }, 00:19:15.047 "claimed": true, 00:19:15.047 "claim_type": "exclusive_write", 00:19:15.047 "zoned": false, 00:19:15.047 "supported_io_types": { 00:19:15.047 "read": true, 00:19:15.047 "write": true, 00:19:15.047 "unmap": true, 00:19:15.047 "flush": true, 00:19:15.047 "reset": true, 00:19:15.047 "nvme_admin": false, 00:19:15.047 "nvme_io": false, 00:19:15.047 "nvme_io_md": false, 00:19:15.047 "write_zeroes": true, 00:19:15.047 "zcopy": true, 00:19:15.047 "get_zone_info": false, 00:19:15.047 "zone_management": false, 00:19:15.047 "zone_append": false, 00:19:15.047 "compare": false, 00:19:15.047 "compare_and_write": false, 00:19:15.047 "abort": true, 00:19:15.047 "seek_hole": false, 00:19:15.047 "seek_data": false, 00:19:15.047 "copy": true, 00:19:15.047 "nvme_iov_md": false 00:19:15.047 }, 00:19:15.047 "memory_domains": [ 00:19:15.047 { 00:19:15.047 "dma_device_id": "system", 00:19:15.047 "dma_device_type": 1 00:19:15.047 }, 00:19:15.047 { 00:19:15.047 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:15.047 "dma_device_type": 2 00:19:15.047 } 00:19:15.047 ], 00:19:15.047 "driver_specific": {} 00:19:15.047 } 
00:19:15.047 ] 00:19:15.047 06:28:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.047 06:28:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:15.047 06:28:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:15.047 06:28:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:15.047 06:28:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:15.047 06:28:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:15.047 06:28:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:15.047 06:28:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:15.047 06:28:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:15.047 06:28:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:15.047 06:28:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:15.047 06:28:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:15.047 06:28:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.047 06:28:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:15.047 06:28:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.047 06:28:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:15.047 06:28:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:19:15.047 06:28:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:15.047 "name": "Existed_Raid", 00:19:15.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.047 "strip_size_kb": 64, 00:19:15.047 "state": "configuring", 00:19:15.047 "raid_level": "raid5f", 00:19:15.047 "superblock": false, 00:19:15.047 "num_base_bdevs": 4, 00:19:15.047 "num_base_bdevs_discovered": 1, 00:19:15.047 "num_base_bdevs_operational": 4, 00:19:15.047 "base_bdevs_list": [ 00:19:15.047 { 00:19:15.047 "name": "BaseBdev1", 00:19:15.047 "uuid": "47325a83-68fb-42fb-9aa9-f023ec6bfbe1", 00:19:15.047 "is_configured": true, 00:19:15.047 "data_offset": 0, 00:19:15.047 "data_size": 65536 00:19:15.047 }, 00:19:15.047 { 00:19:15.047 "name": "BaseBdev2", 00:19:15.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.047 "is_configured": false, 00:19:15.047 "data_offset": 0, 00:19:15.047 "data_size": 0 00:19:15.047 }, 00:19:15.047 { 00:19:15.047 "name": "BaseBdev3", 00:19:15.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.047 "is_configured": false, 00:19:15.047 "data_offset": 0, 00:19:15.047 "data_size": 0 00:19:15.047 }, 00:19:15.047 { 00:19:15.047 "name": "BaseBdev4", 00:19:15.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.047 "is_configured": false, 00:19:15.047 "data_offset": 0, 00:19:15.047 "data_size": 0 00:19:15.047 } 00:19:15.047 ] 00:19:15.047 }' 00:19:15.047 06:28:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:15.047 06:28:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:15.612 06:28:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:15.612 06:28:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.612 06:28:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:15.612 
[2024-11-26 06:28:59.450413] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:15.612 [2024-11-26 06:28:59.450478] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:15.612 06:28:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.612 06:28:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:19:15.612 06:28:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.612 06:28:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:15.612 [2024-11-26 06:28:59.462439] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:15.612 [2024-11-26 06:28:59.464709] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:15.612 [2024-11-26 06:28:59.464756] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:15.612 [2024-11-26 06:28:59.464766] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:15.612 [2024-11-26 06:28:59.464777] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:15.612 [2024-11-26 06:28:59.464784] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:15.612 [2024-11-26 06:28:59.464793] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:15.612 06:28:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.612 06:28:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:15.612 06:28:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:19:15.612 06:28:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:15.612 06:28:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:15.612 06:28:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:15.612 06:28:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:15.612 06:28:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:15.612 06:28:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:15.612 06:28:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:15.612 06:28:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:15.612 06:28:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:15.612 06:28:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:15.612 06:28:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:15.612 06:28:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.612 06:28:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.612 06:28:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:15.613 06:28:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.613 06:28:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:15.613 "name": "Existed_Raid", 00:19:15.613 "uuid": "00000000-0000-0000-0000-000000000000", 
00:19:15.613 "strip_size_kb": 64, 00:19:15.613 "state": "configuring", 00:19:15.613 "raid_level": "raid5f", 00:19:15.613 "superblock": false, 00:19:15.613 "num_base_bdevs": 4, 00:19:15.613 "num_base_bdevs_discovered": 1, 00:19:15.613 "num_base_bdevs_operational": 4, 00:19:15.613 "base_bdevs_list": [ 00:19:15.613 { 00:19:15.613 "name": "BaseBdev1", 00:19:15.613 "uuid": "47325a83-68fb-42fb-9aa9-f023ec6bfbe1", 00:19:15.613 "is_configured": true, 00:19:15.613 "data_offset": 0, 00:19:15.613 "data_size": 65536 00:19:15.613 }, 00:19:15.613 { 00:19:15.613 "name": "BaseBdev2", 00:19:15.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.613 "is_configured": false, 00:19:15.613 "data_offset": 0, 00:19:15.613 "data_size": 0 00:19:15.613 }, 00:19:15.613 { 00:19:15.613 "name": "BaseBdev3", 00:19:15.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.613 "is_configured": false, 00:19:15.613 "data_offset": 0, 00:19:15.613 "data_size": 0 00:19:15.613 }, 00:19:15.613 { 00:19:15.613 "name": "BaseBdev4", 00:19:15.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.613 "is_configured": false, 00:19:15.613 "data_offset": 0, 00:19:15.613 "data_size": 0 00:19:15.613 } 00:19:15.613 ] 00:19:15.613 }' 00:19:15.613 06:28:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:15.613 06:28:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:15.871 06:28:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:15.871 06:28:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.871 06:28:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:15.871 [2024-11-26 06:28:59.945106] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:15.871 BaseBdev2 00:19:15.871 06:28:59 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.871 06:28:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:15.871 06:28:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:19:15.871 06:28:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:15.871 06:28:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:15.871 06:28:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:15.871 06:28:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:15.871 06:28:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:15.871 06:28:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.871 06:28:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:15.871 06:28:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.871 06:28:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:15.871 06:28:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.871 06:28:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:15.871 [ 00:19:15.871 { 00:19:15.871 "name": "BaseBdev2", 00:19:15.871 "aliases": [ 00:19:15.871 "eacd262a-4f6b-423e-92ce-893f27d10928" 00:19:15.871 ], 00:19:15.871 "product_name": "Malloc disk", 00:19:15.871 "block_size": 512, 00:19:15.871 "num_blocks": 65536, 00:19:15.871 "uuid": "eacd262a-4f6b-423e-92ce-893f27d10928", 00:19:15.871 "assigned_rate_limits": { 00:19:15.871 "rw_ios_per_sec": 0, 00:19:15.871 "rw_mbytes_per_sec": 0, 00:19:15.871 
"r_mbytes_per_sec": 0, 00:19:15.871 "w_mbytes_per_sec": 0 00:19:15.871 }, 00:19:15.871 "claimed": true, 00:19:15.871 "claim_type": "exclusive_write", 00:19:15.871 "zoned": false, 00:19:15.871 "supported_io_types": { 00:19:15.871 "read": true, 00:19:15.871 "write": true, 00:19:15.871 "unmap": true, 00:19:15.871 "flush": true, 00:19:15.871 "reset": true, 00:19:15.871 "nvme_admin": false, 00:19:15.871 "nvme_io": false, 00:19:15.871 "nvme_io_md": false, 00:19:15.871 "write_zeroes": true, 00:19:15.871 "zcopy": true, 00:19:15.871 "get_zone_info": false, 00:19:15.871 "zone_management": false, 00:19:15.871 "zone_append": false, 00:19:15.871 "compare": false, 00:19:15.871 "compare_and_write": false, 00:19:15.871 "abort": true, 00:19:15.871 "seek_hole": false, 00:19:15.871 "seek_data": false, 00:19:15.871 "copy": true, 00:19:15.871 "nvme_iov_md": false 00:19:15.871 }, 00:19:15.871 "memory_domains": [ 00:19:15.871 { 00:19:15.871 "dma_device_id": "system", 00:19:15.871 "dma_device_type": 1 00:19:15.871 }, 00:19:15.871 { 00:19:15.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:15.871 "dma_device_type": 2 00:19:15.871 } 00:19:15.871 ], 00:19:15.871 "driver_specific": {} 00:19:15.871 } 00:19:15.871 ] 00:19:15.871 06:28:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.871 06:28:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:15.871 06:28:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:15.871 06:28:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:15.871 06:28:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:15.871 06:28:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:15.871 06:28:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:19:15.871 06:28:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:15.871 06:28:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:15.871 06:28:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:15.871 06:28:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:15.871 06:28:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:15.871 06:28:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:15.871 06:28:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:15.871 06:28:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.871 06:28:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:15.871 06:28:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.871 06:28:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.130 06:29:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.130 06:29:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:16.130 "name": "Existed_Raid", 00:19:16.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:16.130 "strip_size_kb": 64, 00:19:16.130 "state": "configuring", 00:19:16.130 "raid_level": "raid5f", 00:19:16.130 "superblock": false, 00:19:16.130 "num_base_bdevs": 4, 00:19:16.130 "num_base_bdevs_discovered": 2, 00:19:16.130 "num_base_bdevs_operational": 4, 00:19:16.130 "base_bdevs_list": [ 00:19:16.130 { 00:19:16.130 "name": "BaseBdev1", 00:19:16.130 "uuid": 
"47325a83-68fb-42fb-9aa9-f023ec6bfbe1", 00:19:16.130 "is_configured": true, 00:19:16.130 "data_offset": 0, 00:19:16.130 "data_size": 65536 00:19:16.130 }, 00:19:16.130 { 00:19:16.130 "name": "BaseBdev2", 00:19:16.130 "uuid": "eacd262a-4f6b-423e-92ce-893f27d10928", 00:19:16.130 "is_configured": true, 00:19:16.130 "data_offset": 0, 00:19:16.130 "data_size": 65536 00:19:16.130 }, 00:19:16.130 { 00:19:16.130 "name": "BaseBdev3", 00:19:16.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:16.130 "is_configured": false, 00:19:16.130 "data_offset": 0, 00:19:16.130 "data_size": 0 00:19:16.130 }, 00:19:16.130 { 00:19:16.130 "name": "BaseBdev4", 00:19:16.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:16.130 "is_configured": false, 00:19:16.130 "data_offset": 0, 00:19:16.130 "data_size": 0 00:19:16.130 } 00:19:16.130 ] 00:19:16.130 }' 00:19:16.130 06:29:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:16.130 06:29:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.389 06:29:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:19:16.389 06:29:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.389 06:29:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.389 [2024-11-26 06:29:00.517347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:16.648 BaseBdev3 00:19:16.648 06:29:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.648 06:29:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:19:16.648 06:29:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:19:16.648 06:29:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- 
# local bdev_timeout= 00:19:16.648 06:29:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:16.648 06:29:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:16.648 06:29:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:16.648 06:29:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:16.648 06:29:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.648 06:29:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.648 06:29:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.648 06:29:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:16.648 06:29:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.648 06:29:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.648 [ 00:19:16.648 { 00:19:16.648 "name": "BaseBdev3", 00:19:16.648 "aliases": [ 00:19:16.648 "83856086-5076-47d6-bb9d-785a02f26c68" 00:19:16.648 ], 00:19:16.648 "product_name": "Malloc disk", 00:19:16.648 "block_size": 512, 00:19:16.648 "num_blocks": 65536, 00:19:16.648 "uuid": "83856086-5076-47d6-bb9d-785a02f26c68", 00:19:16.648 "assigned_rate_limits": { 00:19:16.648 "rw_ios_per_sec": 0, 00:19:16.648 "rw_mbytes_per_sec": 0, 00:19:16.648 "r_mbytes_per_sec": 0, 00:19:16.648 "w_mbytes_per_sec": 0 00:19:16.648 }, 00:19:16.648 "claimed": true, 00:19:16.648 "claim_type": "exclusive_write", 00:19:16.648 "zoned": false, 00:19:16.648 "supported_io_types": { 00:19:16.648 "read": true, 00:19:16.648 "write": true, 00:19:16.648 "unmap": true, 00:19:16.648 "flush": true, 00:19:16.649 "reset": true, 00:19:16.649 "nvme_admin": false, 
00:19:16.649 "nvme_io": false, 00:19:16.649 "nvme_io_md": false, 00:19:16.649 "write_zeroes": true, 00:19:16.649 "zcopy": true, 00:19:16.649 "get_zone_info": false, 00:19:16.649 "zone_management": false, 00:19:16.649 "zone_append": false, 00:19:16.649 "compare": false, 00:19:16.649 "compare_and_write": false, 00:19:16.649 "abort": true, 00:19:16.649 "seek_hole": false, 00:19:16.649 "seek_data": false, 00:19:16.649 "copy": true, 00:19:16.649 "nvme_iov_md": false 00:19:16.649 }, 00:19:16.649 "memory_domains": [ 00:19:16.649 { 00:19:16.649 "dma_device_id": "system", 00:19:16.649 "dma_device_type": 1 00:19:16.649 }, 00:19:16.649 { 00:19:16.649 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:16.649 "dma_device_type": 2 00:19:16.649 } 00:19:16.649 ], 00:19:16.649 "driver_specific": {} 00:19:16.649 } 00:19:16.649 ] 00:19:16.649 06:29:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.649 06:29:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:16.649 06:29:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:16.649 06:29:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:16.649 06:29:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:16.649 06:29:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:16.649 06:29:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:16.649 06:29:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:16.649 06:29:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:16.649 06:29:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:19:16.649 06:29:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:16.649 06:29:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:16.649 06:29:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:16.649 06:29:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:16.649 06:29:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.649 06:29:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:16.649 06:29:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.649 06:29:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.649 06:29:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.649 06:29:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:16.649 "name": "Existed_Raid", 00:19:16.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:16.649 "strip_size_kb": 64, 00:19:16.649 "state": "configuring", 00:19:16.649 "raid_level": "raid5f", 00:19:16.649 "superblock": false, 00:19:16.649 "num_base_bdevs": 4, 00:19:16.649 "num_base_bdevs_discovered": 3, 00:19:16.649 "num_base_bdevs_operational": 4, 00:19:16.649 "base_bdevs_list": [ 00:19:16.649 { 00:19:16.649 "name": "BaseBdev1", 00:19:16.649 "uuid": "47325a83-68fb-42fb-9aa9-f023ec6bfbe1", 00:19:16.649 "is_configured": true, 00:19:16.649 "data_offset": 0, 00:19:16.649 "data_size": 65536 00:19:16.649 }, 00:19:16.649 { 00:19:16.649 "name": "BaseBdev2", 00:19:16.649 "uuid": "eacd262a-4f6b-423e-92ce-893f27d10928", 00:19:16.649 "is_configured": true, 00:19:16.649 "data_offset": 0, 00:19:16.649 "data_size": 65536 00:19:16.649 }, 00:19:16.649 { 
00:19:16.649 "name": "BaseBdev3", 00:19:16.649 "uuid": "83856086-5076-47d6-bb9d-785a02f26c68", 00:19:16.649 "is_configured": true, 00:19:16.649 "data_offset": 0, 00:19:16.649 "data_size": 65536 00:19:16.649 }, 00:19:16.649 { 00:19:16.649 "name": "BaseBdev4", 00:19:16.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:16.649 "is_configured": false, 00:19:16.649 "data_offset": 0, 00:19:16.649 "data_size": 0 00:19:16.649 } 00:19:16.649 ] 00:19:16.649 }' 00:19:16.649 06:29:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:16.649 06:29:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.908 06:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:19:16.909 06:29:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.909 06:29:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.171 [2024-11-26 06:29:01.072408] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:17.171 [2024-11-26 06:29:01.072626] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:17.171 [2024-11-26 06:29:01.072657] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:19:17.171 [2024-11-26 06:29:01.073027] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:17.171 [2024-11-26 06:29:01.080671] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:17.171 [2024-11-26 06:29:01.080741] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:17.171 [2024-11-26 06:29:01.081206] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:17.171 BaseBdev4 00:19:17.171 06:29:01 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.171 06:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:19:17.171 06:29:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:19:17.171 06:29:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:17.171 06:29:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:17.171 06:29:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:17.171 06:29:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:17.171 06:29:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:17.171 06:29:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.171 06:29:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.171 06:29:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.171 06:29:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:17.171 06:29:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.171 06:29:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.171 [ 00:19:17.171 { 00:19:17.171 "name": "BaseBdev4", 00:19:17.171 "aliases": [ 00:19:17.171 "b1c61586-ce2f-4494-9461-57bd01a37325" 00:19:17.171 ], 00:19:17.171 "product_name": "Malloc disk", 00:19:17.171 "block_size": 512, 00:19:17.171 "num_blocks": 65536, 00:19:17.171 "uuid": "b1c61586-ce2f-4494-9461-57bd01a37325", 00:19:17.171 "assigned_rate_limits": { 00:19:17.171 "rw_ios_per_sec": 0, 00:19:17.171 
"rw_mbytes_per_sec": 0, 00:19:17.171 "r_mbytes_per_sec": 0, 00:19:17.171 "w_mbytes_per_sec": 0 00:19:17.171 }, 00:19:17.171 "claimed": true, 00:19:17.171 "claim_type": "exclusive_write", 00:19:17.171 "zoned": false, 00:19:17.171 "supported_io_types": { 00:19:17.171 "read": true, 00:19:17.171 "write": true, 00:19:17.171 "unmap": true, 00:19:17.171 "flush": true, 00:19:17.171 "reset": true, 00:19:17.171 "nvme_admin": false, 00:19:17.171 "nvme_io": false, 00:19:17.171 "nvme_io_md": false, 00:19:17.171 "write_zeroes": true, 00:19:17.171 "zcopy": true, 00:19:17.171 "get_zone_info": false, 00:19:17.171 "zone_management": false, 00:19:17.171 "zone_append": false, 00:19:17.171 "compare": false, 00:19:17.171 "compare_and_write": false, 00:19:17.171 "abort": true, 00:19:17.171 "seek_hole": false, 00:19:17.171 "seek_data": false, 00:19:17.171 "copy": true, 00:19:17.171 "nvme_iov_md": false 00:19:17.171 }, 00:19:17.171 "memory_domains": [ 00:19:17.171 { 00:19:17.171 "dma_device_id": "system", 00:19:17.171 "dma_device_type": 1 00:19:17.171 }, 00:19:17.171 { 00:19:17.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:17.171 "dma_device_type": 2 00:19:17.171 } 00:19:17.171 ], 00:19:17.171 "driver_specific": {} 00:19:17.171 } 00:19:17.171 ] 00:19:17.171 06:29:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.171 06:29:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:17.171 06:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:17.171 06:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:17.171 06:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:19:17.171 06:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:17.171 06:29:01 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:17.171 06:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:17.171 06:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:17.171 06:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:17.171 06:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:17.171 06:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:17.171 06:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:17.171 06:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:17.171 06:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.171 06:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:17.171 06:29:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.171 06:29:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.171 06:29:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.171 06:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:17.171 "name": "Existed_Raid", 00:19:17.171 "uuid": "a0fe3d3b-6f98-4cc0-b7cf-3d668c22043a", 00:19:17.171 "strip_size_kb": 64, 00:19:17.171 "state": "online", 00:19:17.171 "raid_level": "raid5f", 00:19:17.171 "superblock": false, 00:19:17.171 "num_base_bdevs": 4, 00:19:17.171 "num_base_bdevs_discovered": 4, 00:19:17.171 "num_base_bdevs_operational": 4, 00:19:17.171 "base_bdevs_list": [ 00:19:17.171 { 00:19:17.171 "name": 
"BaseBdev1", 00:19:17.171 "uuid": "47325a83-68fb-42fb-9aa9-f023ec6bfbe1", 00:19:17.171 "is_configured": true, 00:19:17.171 "data_offset": 0, 00:19:17.171 "data_size": 65536 00:19:17.171 }, 00:19:17.171 { 00:19:17.171 "name": "BaseBdev2", 00:19:17.171 "uuid": "eacd262a-4f6b-423e-92ce-893f27d10928", 00:19:17.171 "is_configured": true, 00:19:17.171 "data_offset": 0, 00:19:17.171 "data_size": 65536 00:19:17.172 }, 00:19:17.172 { 00:19:17.172 "name": "BaseBdev3", 00:19:17.172 "uuid": "83856086-5076-47d6-bb9d-785a02f26c68", 00:19:17.172 "is_configured": true, 00:19:17.172 "data_offset": 0, 00:19:17.172 "data_size": 65536 00:19:17.172 }, 00:19:17.172 { 00:19:17.172 "name": "BaseBdev4", 00:19:17.172 "uuid": "b1c61586-ce2f-4494-9461-57bd01a37325", 00:19:17.172 "is_configured": true, 00:19:17.172 "data_offset": 0, 00:19:17.172 "data_size": 65536 00:19:17.172 } 00:19:17.172 ] 00:19:17.172 }' 00:19:17.172 06:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:17.172 06:29:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.743 06:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:17.743 06:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:17.743 06:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:17.743 06:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:17.743 06:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:17.743 06:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:17.743 06:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:17.744 06:29:01 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.744 06:29:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.744 06:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:17.744 [2024-11-26 06:29:01.590267] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:17.744 06:29:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.744 06:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:17.744 "name": "Existed_Raid", 00:19:17.744 "aliases": [ 00:19:17.744 "a0fe3d3b-6f98-4cc0-b7cf-3d668c22043a" 00:19:17.744 ], 00:19:17.744 "product_name": "Raid Volume", 00:19:17.744 "block_size": 512, 00:19:17.744 "num_blocks": 196608, 00:19:17.744 "uuid": "a0fe3d3b-6f98-4cc0-b7cf-3d668c22043a", 00:19:17.744 "assigned_rate_limits": { 00:19:17.744 "rw_ios_per_sec": 0, 00:19:17.744 "rw_mbytes_per_sec": 0, 00:19:17.744 "r_mbytes_per_sec": 0, 00:19:17.744 "w_mbytes_per_sec": 0 00:19:17.744 }, 00:19:17.744 "claimed": false, 00:19:17.744 "zoned": false, 00:19:17.744 "supported_io_types": { 00:19:17.744 "read": true, 00:19:17.744 "write": true, 00:19:17.744 "unmap": false, 00:19:17.744 "flush": false, 00:19:17.744 "reset": true, 00:19:17.744 "nvme_admin": false, 00:19:17.744 "nvme_io": false, 00:19:17.744 "nvme_io_md": false, 00:19:17.744 "write_zeroes": true, 00:19:17.744 "zcopy": false, 00:19:17.744 "get_zone_info": false, 00:19:17.744 "zone_management": false, 00:19:17.744 "zone_append": false, 00:19:17.744 "compare": false, 00:19:17.744 "compare_and_write": false, 00:19:17.744 "abort": false, 00:19:17.744 "seek_hole": false, 00:19:17.744 "seek_data": false, 00:19:17.744 "copy": false, 00:19:17.744 "nvme_iov_md": false 00:19:17.744 }, 00:19:17.744 "driver_specific": { 00:19:17.744 "raid": { 00:19:17.744 "uuid": "a0fe3d3b-6f98-4cc0-b7cf-3d668c22043a", 00:19:17.744 "strip_size_kb": 64, 
00:19:17.744 "state": "online", 00:19:17.744 "raid_level": "raid5f", 00:19:17.744 "superblock": false, 00:19:17.744 "num_base_bdevs": 4, 00:19:17.744 "num_base_bdevs_discovered": 4, 00:19:17.744 "num_base_bdevs_operational": 4, 00:19:17.744 "base_bdevs_list": [ 00:19:17.744 { 00:19:17.744 "name": "BaseBdev1", 00:19:17.744 "uuid": "47325a83-68fb-42fb-9aa9-f023ec6bfbe1", 00:19:17.744 "is_configured": true, 00:19:17.744 "data_offset": 0, 00:19:17.744 "data_size": 65536 00:19:17.744 }, 00:19:17.744 { 00:19:17.744 "name": "BaseBdev2", 00:19:17.744 "uuid": "eacd262a-4f6b-423e-92ce-893f27d10928", 00:19:17.744 "is_configured": true, 00:19:17.744 "data_offset": 0, 00:19:17.744 "data_size": 65536 00:19:17.744 }, 00:19:17.744 { 00:19:17.744 "name": "BaseBdev3", 00:19:17.744 "uuid": "83856086-5076-47d6-bb9d-785a02f26c68", 00:19:17.744 "is_configured": true, 00:19:17.744 "data_offset": 0, 00:19:17.744 "data_size": 65536 00:19:17.744 }, 00:19:17.744 { 00:19:17.744 "name": "BaseBdev4", 00:19:17.744 "uuid": "b1c61586-ce2f-4494-9461-57bd01a37325", 00:19:17.744 "is_configured": true, 00:19:17.744 "data_offset": 0, 00:19:17.744 "data_size": 65536 00:19:17.744 } 00:19:17.744 ] 00:19:17.744 } 00:19:17.744 } 00:19:17.744 }' 00:19:17.744 06:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:17.744 06:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:17.744 BaseBdev2 00:19:17.744 BaseBdev3 00:19:17.744 BaseBdev4' 00:19:17.744 06:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:17.744 06:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:17.744 06:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:17.744 06:29:01 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:17.744 06:29:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.744 06:29:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.744 06:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:17.744 06:29:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.744 06:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:17.744 06:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:17.744 06:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:17.744 06:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:17.744 06:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:17.744 06:29:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.744 06:29:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.744 06:29:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.744 06:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:17.744 06:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:17.744 06:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:17.744 06:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:17.744 06:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:19:17.744 06:29:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.744 06:29:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.744 06:29:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.744 06:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:17.744 06:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:17.744 06:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:17.744 06:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:17.744 06:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:19:17.744 06:29:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.744 06:29:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.744 06:29:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.004 06:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:18.004 06:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:18.004 06:29:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:18.004 06:29:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.004 06:29:01 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:19:18.004 [2024-11-26 06:29:01.897539] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:18.004 06:29:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.004 06:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:18.004 06:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:19:18.004 06:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:18.004 06:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:19:18.004 06:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:18.004 06:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:19:18.005 06:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:18.005 06:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:18.005 06:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:18.005 06:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:18.005 06:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:18.005 06:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:18.005 06:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:18.005 06:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:18.005 06:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:18.005 06:29:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.005 06:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:18.005 06:29:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.005 06:29:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.005 06:29:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.005 06:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:18.005 "name": "Existed_Raid", 00:19:18.005 "uuid": "a0fe3d3b-6f98-4cc0-b7cf-3d668c22043a", 00:19:18.005 "strip_size_kb": 64, 00:19:18.005 "state": "online", 00:19:18.005 "raid_level": "raid5f", 00:19:18.005 "superblock": false, 00:19:18.005 "num_base_bdevs": 4, 00:19:18.005 "num_base_bdevs_discovered": 3, 00:19:18.005 "num_base_bdevs_operational": 3, 00:19:18.005 "base_bdevs_list": [ 00:19:18.005 { 00:19:18.005 "name": null, 00:19:18.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:18.005 "is_configured": false, 00:19:18.005 "data_offset": 0, 00:19:18.005 "data_size": 65536 00:19:18.005 }, 00:19:18.005 { 00:19:18.005 "name": "BaseBdev2", 00:19:18.005 "uuid": "eacd262a-4f6b-423e-92ce-893f27d10928", 00:19:18.005 "is_configured": true, 00:19:18.005 "data_offset": 0, 00:19:18.005 "data_size": 65536 00:19:18.005 }, 00:19:18.005 { 00:19:18.005 "name": "BaseBdev3", 00:19:18.005 "uuid": "83856086-5076-47d6-bb9d-785a02f26c68", 00:19:18.005 "is_configured": true, 00:19:18.005 "data_offset": 0, 00:19:18.005 "data_size": 65536 00:19:18.005 }, 00:19:18.005 { 00:19:18.005 "name": "BaseBdev4", 00:19:18.005 "uuid": "b1c61586-ce2f-4494-9461-57bd01a37325", 00:19:18.005 "is_configured": true, 00:19:18.005 "data_offset": 0, 00:19:18.005 "data_size": 65536 00:19:18.005 } 00:19:18.005 ] 00:19:18.005 }' 00:19:18.005 
06:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:18.005 06:29:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.573 06:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:18.573 06:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:18.573 06:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:18.573 06:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.573 06:29:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.573 06:29:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.573 06:29:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.573 06:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:18.573 06:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:18.573 06:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:18.573 06:29:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.573 06:29:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.573 [2024-11-26 06:29:02.534661] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:18.573 [2024-11-26 06:29:02.534823] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:18.574 [2024-11-26 06:29:02.642466] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:18.574 06:29:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:19:18.574 06:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:18.574 06:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:18.574 06:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.574 06:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:18.574 06:29:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.574 06:29:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.574 06:29:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.574 06:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:18.574 06:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:18.574 06:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:19:18.574 06:29:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.574 06:29:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.574 [2024-11-26 06:29:02.702414] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:18.832 06:29:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.832 06:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:18.832 06:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:18.833 06:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.833 06:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # 
jq -r '.[0]["name"]' 00:19:18.833 06:29:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.833 06:29:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.833 06:29:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.833 06:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:18.833 06:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:18.833 06:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:19:18.833 06:29:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.833 06:29:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.833 [2024-11-26 06:29:02.869653] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:19:18.833 [2024-11-26 06:29:02.869758] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:19.092 06:29:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.092 06:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:19.092 06:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:19.092 06:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.092 06:29:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:19.092 06:29:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.092 06:29:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.092 06:29:02 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.092 06:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:19.092 06:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:19.092 06:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:19:19.092 06:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:19:19.092 06:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:19.092 06:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:19.092 06:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.092 06:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.092 BaseBdev2 00:19:19.092 06:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.092 06:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:19:19.092 06:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:19:19.092 06:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:19.092 06:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:19.092 06:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:19.092 06:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:19.092 06:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:19.092 06:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:19.092 06:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.092 06:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.092 06:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:19.092 06:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.092 06:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.092 [ 00:19:19.092 { 00:19:19.092 "name": "BaseBdev2", 00:19:19.092 "aliases": [ 00:19:19.092 "19131ea7-bff1-4c7f-91d1-4f029d36bb2c" 00:19:19.092 ], 00:19:19.092 "product_name": "Malloc disk", 00:19:19.092 "block_size": 512, 00:19:19.092 "num_blocks": 65536, 00:19:19.092 "uuid": "19131ea7-bff1-4c7f-91d1-4f029d36bb2c", 00:19:19.092 "assigned_rate_limits": { 00:19:19.092 "rw_ios_per_sec": 0, 00:19:19.092 "rw_mbytes_per_sec": 0, 00:19:19.092 "r_mbytes_per_sec": 0, 00:19:19.092 "w_mbytes_per_sec": 0 00:19:19.092 }, 00:19:19.092 "claimed": false, 00:19:19.092 "zoned": false, 00:19:19.092 "supported_io_types": { 00:19:19.092 "read": true, 00:19:19.092 "write": true, 00:19:19.092 "unmap": true, 00:19:19.092 "flush": true, 00:19:19.092 "reset": true, 00:19:19.092 "nvme_admin": false, 00:19:19.092 "nvme_io": false, 00:19:19.092 "nvme_io_md": false, 00:19:19.092 "write_zeroes": true, 00:19:19.092 "zcopy": true, 00:19:19.092 "get_zone_info": false, 00:19:19.092 "zone_management": false, 00:19:19.092 "zone_append": false, 00:19:19.092 "compare": false, 00:19:19.092 "compare_and_write": false, 00:19:19.092 "abort": true, 00:19:19.092 "seek_hole": false, 00:19:19.092 "seek_data": false, 00:19:19.092 "copy": true, 00:19:19.092 "nvme_iov_md": false 00:19:19.092 }, 00:19:19.092 "memory_domains": [ 00:19:19.092 { 00:19:19.092 "dma_device_id": "system", 00:19:19.092 "dma_device_type": 1 00:19:19.092 }, 
00:19:19.092 { 00:19:19.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:19.092 "dma_device_type": 2 00:19:19.092 } 00:19:19.092 ], 00:19:19.092 "driver_specific": {} 00:19:19.092 } 00:19:19.092 ] 00:19:19.093 06:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.093 06:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:19.093 06:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:19.093 06:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:19.093 06:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:19:19.093 06:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.093 06:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.093 BaseBdev3 00:19:19.093 06:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.093 06:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:19:19.093 06:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:19:19.093 06:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:19.093 06:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:19.093 06:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:19.093 06:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:19.093 06:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:19.093 06:29:03 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.093 06:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.093 06:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.093 06:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:19.093 06:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.093 06:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.093 [ 00:19:19.093 { 00:19:19.093 "name": "BaseBdev3", 00:19:19.093 "aliases": [ 00:19:19.093 "ad8e9bdc-c774-420f-a1d4-56220ee82ba1" 00:19:19.093 ], 00:19:19.093 "product_name": "Malloc disk", 00:19:19.093 "block_size": 512, 00:19:19.093 "num_blocks": 65536, 00:19:19.093 "uuid": "ad8e9bdc-c774-420f-a1d4-56220ee82ba1", 00:19:19.093 "assigned_rate_limits": { 00:19:19.093 "rw_ios_per_sec": 0, 00:19:19.093 "rw_mbytes_per_sec": 0, 00:19:19.093 "r_mbytes_per_sec": 0, 00:19:19.093 "w_mbytes_per_sec": 0 00:19:19.093 }, 00:19:19.093 "claimed": false, 00:19:19.093 "zoned": false, 00:19:19.093 "supported_io_types": { 00:19:19.093 "read": true, 00:19:19.093 "write": true, 00:19:19.093 "unmap": true, 00:19:19.093 "flush": true, 00:19:19.093 "reset": true, 00:19:19.093 "nvme_admin": false, 00:19:19.093 "nvme_io": false, 00:19:19.093 "nvme_io_md": false, 00:19:19.093 "write_zeroes": true, 00:19:19.093 "zcopy": true, 00:19:19.093 "get_zone_info": false, 00:19:19.093 "zone_management": false, 00:19:19.093 "zone_append": false, 00:19:19.093 "compare": false, 00:19:19.093 "compare_and_write": false, 00:19:19.093 "abort": true, 00:19:19.093 "seek_hole": false, 00:19:19.093 "seek_data": false, 00:19:19.093 "copy": true, 00:19:19.093 "nvme_iov_md": false 00:19:19.093 }, 00:19:19.093 "memory_domains": [ 00:19:19.093 { 00:19:19.093 "dma_device_id": "system", 00:19:19.093 
"dma_device_type": 1 00:19:19.093 }, 00:19:19.093 { 00:19:19.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:19.093 "dma_device_type": 2 00:19:19.093 } 00:19:19.093 ], 00:19:19.093 "driver_specific": {} 00:19:19.093 } 00:19:19.093 ] 00:19:19.093 06:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.093 06:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:19.093 06:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:19.093 06:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:19.093 06:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:19:19.093 06:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.093 06:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.352 BaseBdev4 00:19:19.352 06:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.352 06:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:19:19.352 06:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:19:19.352 06:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:19.352 06:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:19.352 06:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:19.352 06:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:19.352 06:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:19.352 06:29:03 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.352 06:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.352 06:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.352 06:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:19.352 06:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.352 06:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.352 [ 00:19:19.352 { 00:19:19.352 "name": "BaseBdev4", 00:19:19.352 "aliases": [ 00:19:19.352 "02f9e51d-710f-43e3-adff-751765a0c4f0" 00:19:19.352 ], 00:19:19.352 "product_name": "Malloc disk", 00:19:19.352 "block_size": 512, 00:19:19.352 "num_blocks": 65536, 00:19:19.352 "uuid": "02f9e51d-710f-43e3-adff-751765a0c4f0", 00:19:19.352 "assigned_rate_limits": { 00:19:19.352 "rw_ios_per_sec": 0, 00:19:19.352 "rw_mbytes_per_sec": 0, 00:19:19.352 "r_mbytes_per_sec": 0, 00:19:19.352 "w_mbytes_per_sec": 0 00:19:19.352 }, 00:19:19.352 "claimed": false, 00:19:19.352 "zoned": false, 00:19:19.352 "supported_io_types": { 00:19:19.352 "read": true, 00:19:19.352 "write": true, 00:19:19.352 "unmap": true, 00:19:19.352 "flush": true, 00:19:19.352 "reset": true, 00:19:19.352 "nvme_admin": false, 00:19:19.352 "nvme_io": false, 00:19:19.352 "nvme_io_md": false, 00:19:19.352 "write_zeroes": true, 00:19:19.352 "zcopy": true, 00:19:19.352 "get_zone_info": false, 00:19:19.352 "zone_management": false, 00:19:19.352 "zone_append": false, 00:19:19.352 "compare": false, 00:19:19.352 "compare_and_write": false, 00:19:19.352 "abort": true, 00:19:19.352 "seek_hole": false, 00:19:19.352 "seek_data": false, 00:19:19.352 "copy": true, 00:19:19.352 "nvme_iov_md": false 00:19:19.352 }, 00:19:19.352 "memory_domains": [ 00:19:19.352 { 00:19:19.352 
"dma_device_id": "system", 00:19:19.352 "dma_device_type": 1 00:19:19.352 }, 00:19:19.352 { 00:19:19.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:19.352 "dma_device_type": 2 00:19:19.352 } 00:19:19.352 ], 00:19:19.352 "driver_specific": {} 00:19:19.352 } 00:19:19.352 ] 00:19:19.352 06:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.352 06:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:19.352 06:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:19.352 06:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:19.352 06:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:19:19.352 06:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.353 06:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.353 [2024-11-26 06:29:03.296718] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:19.353 [2024-11-26 06:29:03.296818] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:19.353 [2024-11-26 06:29:03.296870] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:19.353 [2024-11-26 06:29:03.299297] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:19.353 [2024-11-26 06:29:03.299399] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:19.353 06:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.353 06:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:19:19.353 06:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:19.353 06:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:19.353 06:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:19.353 06:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:19.353 06:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:19.353 06:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:19.353 06:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:19.353 06:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:19.353 06:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:19.353 06:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.353 06:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:19.353 06:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.353 06:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.353 06:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.353 06:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:19.353 "name": "Existed_Raid", 00:19:19.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:19.353 "strip_size_kb": 64, 00:19:19.353 "state": "configuring", 00:19:19.353 "raid_level": "raid5f", 00:19:19.353 "superblock": false, 00:19:19.353 
"num_base_bdevs": 4, 00:19:19.353 "num_base_bdevs_discovered": 3, 00:19:19.353 "num_base_bdevs_operational": 4, 00:19:19.353 "base_bdevs_list": [ 00:19:19.353 { 00:19:19.353 "name": "BaseBdev1", 00:19:19.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:19.353 "is_configured": false, 00:19:19.353 "data_offset": 0, 00:19:19.353 "data_size": 0 00:19:19.353 }, 00:19:19.353 { 00:19:19.353 "name": "BaseBdev2", 00:19:19.353 "uuid": "19131ea7-bff1-4c7f-91d1-4f029d36bb2c", 00:19:19.353 "is_configured": true, 00:19:19.353 "data_offset": 0, 00:19:19.353 "data_size": 65536 00:19:19.353 }, 00:19:19.353 { 00:19:19.353 "name": "BaseBdev3", 00:19:19.353 "uuid": "ad8e9bdc-c774-420f-a1d4-56220ee82ba1", 00:19:19.353 "is_configured": true, 00:19:19.353 "data_offset": 0, 00:19:19.353 "data_size": 65536 00:19:19.353 }, 00:19:19.353 { 00:19:19.353 "name": "BaseBdev4", 00:19:19.353 "uuid": "02f9e51d-710f-43e3-adff-751765a0c4f0", 00:19:19.353 "is_configured": true, 00:19:19.353 "data_offset": 0, 00:19:19.353 "data_size": 65536 00:19:19.353 } 00:19:19.353 ] 00:19:19.353 }' 00:19:19.353 06:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:19.353 06:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.612 06:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:19:19.612 06:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.612 06:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.871 [2024-11-26 06:29:03.743996] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:19.871 06:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.871 06:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
00:19:19.871 06:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:19.871 06:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:19.871 06:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:19.871 06:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:19.871 06:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:19.871 06:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:19.871 06:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:19.871 06:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:19.871 06:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:19.871 06:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.871 06:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.871 06:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:19.871 06:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.871 06:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.871 06:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:19.871 "name": "Existed_Raid", 00:19:19.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:19.871 "strip_size_kb": 64, 00:19:19.871 "state": "configuring", 00:19:19.871 "raid_level": "raid5f", 00:19:19.871 "superblock": false, 00:19:19.871 "num_base_bdevs": 4, 
00:19:19.871 "num_base_bdevs_discovered": 2, 00:19:19.871 "num_base_bdevs_operational": 4, 00:19:19.871 "base_bdevs_list": [ 00:19:19.871 { 00:19:19.871 "name": "BaseBdev1", 00:19:19.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:19.871 "is_configured": false, 00:19:19.871 "data_offset": 0, 00:19:19.871 "data_size": 0 00:19:19.871 }, 00:19:19.871 { 00:19:19.871 "name": null, 00:19:19.871 "uuid": "19131ea7-bff1-4c7f-91d1-4f029d36bb2c", 00:19:19.871 "is_configured": false, 00:19:19.871 "data_offset": 0, 00:19:19.871 "data_size": 65536 00:19:19.871 }, 00:19:19.871 { 00:19:19.871 "name": "BaseBdev3", 00:19:19.871 "uuid": "ad8e9bdc-c774-420f-a1d4-56220ee82ba1", 00:19:19.871 "is_configured": true, 00:19:19.871 "data_offset": 0, 00:19:19.871 "data_size": 65536 00:19:19.871 }, 00:19:19.871 { 00:19:19.871 "name": "BaseBdev4", 00:19:19.871 "uuid": "02f9e51d-710f-43e3-adff-751765a0c4f0", 00:19:19.871 "is_configured": true, 00:19:19.871 "data_offset": 0, 00:19:19.871 "data_size": 65536 00:19:19.871 } 00:19:19.871 ] 00:19:19.871 }' 00:19:19.871 06:29:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:19.871 06:29:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.130 06:29:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.130 06:29:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.130 06:29:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:20.130 06:29:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.130 06:29:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.130 06:29:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:19:20.130 06:29:04 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:20.130 06:29:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.130 06:29:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.388 [2024-11-26 06:29:04.287503] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:20.388 BaseBdev1 00:19:20.388 06:29:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.388 06:29:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:19:20.388 06:29:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:19:20.388 06:29:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:20.388 06:29:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:20.388 06:29:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:20.388 06:29:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:20.388 06:29:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:20.388 06:29:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.388 06:29:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.388 06:29:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.388 06:29:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:20.388 06:29:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.388 06:29:04 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.388 [ 00:19:20.388 { 00:19:20.388 "name": "BaseBdev1", 00:19:20.388 "aliases": [ 00:19:20.388 "b3259bd0-eb19-4bc9-a698-5c6a20a02a29" 00:19:20.388 ], 00:19:20.388 "product_name": "Malloc disk", 00:19:20.388 "block_size": 512, 00:19:20.388 "num_blocks": 65536, 00:19:20.388 "uuid": "b3259bd0-eb19-4bc9-a698-5c6a20a02a29", 00:19:20.388 "assigned_rate_limits": { 00:19:20.388 "rw_ios_per_sec": 0, 00:19:20.388 "rw_mbytes_per_sec": 0, 00:19:20.388 "r_mbytes_per_sec": 0, 00:19:20.388 "w_mbytes_per_sec": 0 00:19:20.388 }, 00:19:20.388 "claimed": true, 00:19:20.388 "claim_type": "exclusive_write", 00:19:20.388 "zoned": false, 00:19:20.388 "supported_io_types": { 00:19:20.388 "read": true, 00:19:20.388 "write": true, 00:19:20.388 "unmap": true, 00:19:20.388 "flush": true, 00:19:20.388 "reset": true, 00:19:20.388 "nvme_admin": false, 00:19:20.388 "nvme_io": false, 00:19:20.388 "nvme_io_md": false, 00:19:20.388 "write_zeroes": true, 00:19:20.388 "zcopy": true, 00:19:20.388 "get_zone_info": false, 00:19:20.388 "zone_management": false, 00:19:20.388 "zone_append": false, 00:19:20.388 "compare": false, 00:19:20.388 "compare_and_write": false, 00:19:20.388 "abort": true, 00:19:20.388 "seek_hole": false, 00:19:20.388 "seek_data": false, 00:19:20.388 "copy": true, 00:19:20.388 "nvme_iov_md": false 00:19:20.388 }, 00:19:20.389 "memory_domains": [ 00:19:20.389 { 00:19:20.389 "dma_device_id": "system", 00:19:20.389 "dma_device_type": 1 00:19:20.389 }, 00:19:20.389 { 00:19:20.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:20.389 "dma_device_type": 2 00:19:20.389 } 00:19:20.389 ], 00:19:20.389 "driver_specific": {} 00:19:20.389 } 00:19:20.389 ] 00:19:20.389 06:29:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.389 06:29:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:20.389 06:29:04 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:20.389 06:29:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:20.389 06:29:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:20.389 06:29:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:20.389 06:29:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:20.389 06:29:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:20.389 06:29:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:20.389 06:29:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:20.389 06:29:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:20.389 06:29:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:20.389 06:29:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.389 06:29:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:20.389 06:29:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.389 06:29:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.389 06:29:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.389 06:29:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:20.389 "name": "Existed_Raid", 00:19:20.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:20.389 "strip_size_kb": 64, 00:19:20.389 "state": 
"configuring", 00:19:20.389 "raid_level": "raid5f", 00:19:20.389 "superblock": false, 00:19:20.389 "num_base_bdevs": 4, 00:19:20.389 "num_base_bdevs_discovered": 3, 00:19:20.389 "num_base_bdevs_operational": 4, 00:19:20.389 "base_bdevs_list": [ 00:19:20.389 { 00:19:20.389 "name": "BaseBdev1", 00:19:20.389 "uuid": "b3259bd0-eb19-4bc9-a698-5c6a20a02a29", 00:19:20.389 "is_configured": true, 00:19:20.389 "data_offset": 0, 00:19:20.389 "data_size": 65536 00:19:20.389 }, 00:19:20.389 { 00:19:20.389 "name": null, 00:19:20.389 "uuid": "19131ea7-bff1-4c7f-91d1-4f029d36bb2c", 00:19:20.389 "is_configured": false, 00:19:20.389 "data_offset": 0, 00:19:20.389 "data_size": 65536 00:19:20.389 }, 00:19:20.389 { 00:19:20.389 "name": "BaseBdev3", 00:19:20.389 "uuid": "ad8e9bdc-c774-420f-a1d4-56220ee82ba1", 00:19:20.389 "is_configured": true, 00:19:20.389 "data_offset": 0, 00:19:20.389 "data_size": 65536 00:19:20.389 }, 00:19:20.389 { 00:19:20.389 "name": "BaseBdev4", 00:19:20.389 "uuid": "02f9e51d-710f-43e3-adff-751765a0c4f0", 00:19:20.389 "is_configured": true, 00:19:20.389 "data_offset": 0, 00:19:20.389 "data_size": 65536 00:19:20.389 } 00:19:20.389 ] 00:19:20.389 }' 00:19:20.389 06:29:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:20.389 06:29:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.957 06:29:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:20.957 06:29:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.957 06:29:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.957 06:29:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.957 06:29:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.957 06:29:04 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:19:20.957 06:29:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:19:20.957 06:29:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.957 06:29:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.957 [2024-11-26 06:29:04.858622] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:20.957 06:29:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.957 06:29:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:20.957 06:29:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:20.957 06:29:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:20.957 06:29:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:20.957 06:29:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:20.957 06:29:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:20.957 06:29:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:20.957 06:29:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:20.957 06:29:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:20.957 06:29:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:20.957 06:29:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.957 06:29:04 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.957 06:29:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:20.957 06:29:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.957 06:29:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.957 06:29:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:20.957 "name": "Existed_Raid", 00:19:20.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:20.957 "strip_size_kb": 64, 00:19:20.957 "state": "configuring", 00:19:20.957 "raid_level": "raid5f", 00:19:20.957 "superblock": false, 00:19:20.957 "num_base_bdevs": 4, 00:19:20.957 "num_base_bdevs_discovered": 2, 00:19:20.957 "num_base_bdevs_operational": 4, 00:19:20.957 "base_bdevs_list": [ 00:19:20.957 { 00:19:20.957 "name": "BaseBdev1", 00:19:20.957 "uuid": "b3259bd0-eb19-4bc9-a698-5c6a20a02a29", 00:19:20.957 "is_configured": true, 00:19:20.957 "data_offset": 0, 00:19:20.957 "data_size": 65536 00:19:20.957 }, 00:19:20.957 { 00:19:20.957 "name": null, 00:19:20.957 "uuid": "19131ea7-bff1-4c7f-91d1-4f029d36bb2c", 00:19:20.957 "is_configured": false, 00:19:20.957 "data_offset": 0, 00:19:20.957 "data_size": 65536 00:19:20.957 }, 00:19:20.957 { 00:19:20.957 "name": null, 00:19:20.957 "uuid": "ad8e9bdc-c774-420f-a1d4-56220ee82ba1", 00:19:20.957 "is_configured": false, 00:19:20.957 "data_offset": 0, 00:19:20.957 "data_size": 65536 00:19:20.957 }, 00:19:20.957 { 00:19:20.957 "name": "BaseBdev4", 00:19:20.957 "uuid": "02f9e51d-710f-43e3-adff-751765a0c4f0", 00:19:20.957 "is_configured": true, 00:19:20.957 "data_offset": 0, 00:19:20.957 "data_size": 65536 00:19:20.957 } 00:19:20.957 ] 00:19:20.957 }' 00:19:20.957 06:29:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:20.957 06:29:04 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.218 06:29:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.218 06:29:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:21.218 06:29:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.218 06:29:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.218 06:29:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.218 06:29:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:19:21.218 06:29:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:19:21.218 06:29:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.218 06:29:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.218 [2024-11-26 06:29:05.309850] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:21.218 06:29:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.218 06:29:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:21.218 06:29:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:21.218 06:29:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:21.218 06:29:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:21.218 06:29:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:21.218 
06:29:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:21.218 06:29:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:21.218 06:29:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:21.218 06:29:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:21.218 06:29:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:21.218 06:29:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.218 06:29:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:21.218 06:29:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.218 06:29:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.218 06:29:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.477 06:29:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:21.477 "name": "Existed_Raid", 00:19:21.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:21.477 "strip_size_kb": 64, 00:19:21.477 "state": "configuring", 00:19:21.477 "raid_level": "raid5f", 00:19:21.477 "superblock": false, 00:19:21.477 "num_base_bdevs": 4, 00:19:21.477 "num_base_bdevs_discovered": 3, 00:19:21.477 "num_base_bdevs_operational": 4, 00:19:21.477 "base_bdevs_list": [ 00:19:21.477 { 00:19:21.477 "name": "BaseBdev1", 00:19:21.477 "uuid": "b3259bd0-eb19-4bc9-a698-5c6a20a02a29", 00:19:21.477 "is_configured": true, 00:19:21.477 "data_offset": 0, 00:19:21.477 "data_size": 65536 00:19:21.477 }, 00:19:21.477 { 00:19:21.477 "name": null, 00:19:21.477 "uuid": "19131ea7-bff1-4c7f-91d1-4f029d36bb2c", 00:19:21.477 "is_configured": 
false, 00:19:21.477 "data_offset": 0, 00:19:21.477 "data_size": 65536 00:19:21.477 }, 00:19:21.477 { 00:19:21.477 "name": "BaseBdev3", 00:19:21.477 "uuid": "ad8e9bdc-c774-420f-a1d4-56220ee82ba1", 00:19:21.477 "is_configured": true, 00:19:21.477 "data_offset": 0, 00:19:21.477 "data_size": 65536 00:19:21.477 }, 00:19:21.477 { 00:19:21.477 "name": "BaseBdev4", 00:19:21.477 "uuid": "02f9e51d-710f-43e3-adff-751765a0c4f0", 00:19:21.477 "is_configured": true, 00:19:21.477 "data_offset": 0, 00:19:21.477 "data_size": 65536 00:19:21.477 } 00:19:21.477 ] 00:19:21.477 }' 00:19:21.477 06:29:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:21.477 06:29:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.736 06:29:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.736 06:29:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:21.736 06:29:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.736 06:29:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.736 06:29:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.736 06:29:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:19:21.736 06:29:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:21.736 06:29:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.736 06:29:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.736 [2024-11-26 06:29:05.797145] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:21.996 06:29:05 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.996 06:29:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:21.996 06:29:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:21.996 06:29:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:21.996 06:29:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:21.996 06:29:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:21.996 06:29:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:21.996 06:29:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:21.996 06:29:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:21.996 06:29:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:21.996 06:29:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:21.996 06:29:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.996 06:29:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:21.996 06:29:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.996 06:29:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.996 06:29:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.996 06:29:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:21.996 "name": "Existed_Raid", 00:19:21.996 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:19:21.996 "strip_size_kb": 64, 00:19:21.996 "state": "configuring", 00:19:21.996 "raid_level": "raid5f", 00:19:21.996 "superblock": false, 00:19:21.996 "num_base_bdevs": 4, 00:19:21.996 "num_base_bdevs_discovered": 2, 00:19:21.996 "num_base_bdevs_operational": 4, 00:19:21.996 "base_bdevs_list": [ 00:19:21.996 { 00:19:21.996 "name": null, 00:19:21.996 "uuid": "b3259bd0-eb19-4bc9-a698-5c6a20a02a29", 00:19:21.996 "is_configured": false, 00:19:21.996 "data_offset": 0, 00:19:21.996 "data_size": 65536 00:19:21.996 }, 00:19:21.996 { 00:19:21.996 "name": null, 00:19:21.996 "uuid": "19131ea7-bff1-4c7f-91d1-4f029d36bb2c", 00:19:21.996 "is_configured": false, 00:19:21.996 "data_offset": 0, 00:19:21.996 "data_size": 65536 00:19:21.996 }, 00:19:21.996 { 00:19:21.996 "name": "BaseBdev3", 00:19:21.996 "uuid": "ad8e9bdc-c774-420f-a1d4-56220ee82ba1", 00:19:21.996 "is_configured": true, 00:19:21.996 "data_offset": 0, 00:19:21.996 "data_size": 65536 00:19:21.996 }, 00:19:21.996 { 00:19:21.996 "name": "BaseBdev4", 00:19:21.996 "uuid": "02f9e51d-710f-43e3-adff-751765a0c4f0", 00:19:21.996 "is_configured": true, 00:19:21.996 "data_offset": 0, 00:19:21.996 "data_size": 65536 00:19:21.996 } 00:19:21.996 ] 00:19:21.996 }' 00:19:21.996 06:29:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:21.996 06:29:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.256 06:29:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.256 06:29:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:22.256 06:29:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.257 06:29:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.257 06:29:06 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.257 06:29:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:19:22.257 06:29:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:19:22.257 06:29:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.257 06:29:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.257 [2024-11-26 06:29:06.351605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:22.257 06:29:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.257 06:29:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:22.257 06:29:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:22.257 06:29:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:22.257 06:29:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:22.257 06:29:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:22.257 06:29:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:22.257 06:29:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:22.257 06:29:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:22.257 06:29:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:22.257 06:29:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:22.257 06:29:06 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:22.257 06:29:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.257 06:29:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.257 06:29:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.257 06:29:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.517 06:29:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:22.517 "name": "Existed_Raid", 00:19:22.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:22.517 "strip_size_kb": 64, 00:19:22.517 "state": "configuring", 00:19:22.517 "raid_level": "raid5f", 00:19:22.517 "superblock": false, 00:19:22.517 "num_base_bdevs": 4, 00:19:22.517 "num_base_bdevs_discovered": 3, 00:19:22.517 "num_base_bdevs_operational": 4, 00:19:22.517 "base_bdevs_list": [ 00:19:22.517 { 00:19:22.517 "name": null, 00:19:22.517 "uuid": "b3259bd0-eb19-4bc9-a698-5c6a20a02a29", 00:19:22.517 "is_configured": false, 00:19:22.517 "data_offset": 0, 00:19:22.517 "data_size": 65536 00:19:22.517 }, 00:19:22.517 { 00:19:22.517 "name": "BaseBdev2", 00:19:22.517 "uuid": "19131ea7-bff1-4c7f-91d1-4f029d36bb2c", 00:19:22.517 "is_configured": true, 00:19:22.517 "data_offset": 0, 00:19:22.517 "data_size": 65536 00:19:22.517 }, 00:19:22.517 { 00:19:22.517 "name": "BaseBdev3", 00:19:22.517 "uuid": "ad8e9bdc-c774-420f-a1d4-56220ee82ba1", 00:19:22.517 "is_configured": true, 00:19:22.517 "data_offset": 0, 00:19:22.517 "data_size": 65536 00:19:22.517 }, 00:19:22.517 { 00:19:22.517 "name": "BaseBdev4", 00:19:22.517 "uuid": "02f9e51d-710f-43e3-adff-751765a0c4f0", 00:19:22.517 "is_configured": true, 00:19:22.517 "data_offset": 0, 00:19:22.517 "data_size": 65536 00:19:22.517 } 00:19:22.517 ] 00:19:22.517 }' 00:19:22.517 06:29:06 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:22.517 06:29:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.777 06:29:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.777 06:29:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:22.777 06:29:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.777 06:29:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.777 06:29:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.777 06:29:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:19:22.777 06:29:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.777 06:29:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:22.777 06:29:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.777 06:29:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.777 06:29:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.777 06:29:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b3259bd0-eb19-4bc9-a698-5c6a20a02a29 00:19:22.777 06:29:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.777 06:29:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.037 [2024-11-26 06:29:06.951143] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:19:23.037 [2024-11-26 
06:29:06.951313] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:23.037 [2024-11-26 06:29:06.951339] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:19:23.037 [2024-11-26 06:29:06.951746] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:19:23.037 [2024-11-26 06:29:06.958804] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:23.037 [2024-11-26 06:29:06.958864] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:19:23.037 [2024-11-26 06:29:06.959198] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:23.037 NewBaseBdev 00:19:23.037 06:29:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.037 06:29:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:19:23.037 06:29:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:19:23.037 06:29:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:23.037 06:29:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:23.037 06:29:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:23.037 06:29:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:23.037 06:29:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:23.037 06:29:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.037 06:29:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.037 06:29:06 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.037 06:29:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:23.037 06:29:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.037 06:29:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.037 [ 00:19:23.037 { 00:19:23.037 "name": "NewBaseBdev", 00:19:23.037 "aliases": [ 00:19:23.037 "b3259bd0-eb19-4bc9-a698-5c6a20a02a29" 00:19:23.037 ], 00:19:23.037 "product_name": "Malloc disk", 00:19:23.037 "block_size": 512, 00:19:23.037 "num_blocks": 65536, 00:19:23.037 "uuid": "b3259bd0-eb19-4bc9-a698-5c6a20a02a29", 00:19:23.037 "assigned_rate_limits": { 00:19:23.037 "rw_ios_per_sec": 0, 00:19:23.037 "rw_mbytes_per_sec": 0, 00:19:23.037 "r_mbytes_per_sec": 0, 00:19:23.037 "w_mbytes_per_sec": 0 00:19:23.037 }, 00:19:23.037 "claimed": true, 00:19:23.037 "claim_type": "exclusive_write", 00:19:23.037 "zoned": false, 00:19:23.037 "supported_io_types": { 00:19:23.037 "read": true, 00:19:23.037 "write": true, 00:19:23.037 "unmap": true, 00:19:23.037 "flush": true, 00:19:23.037 "reset": true, 00:19:23.037 "nvme_admin": false, 00:19:23.037 "nvme_io": false, 00:19:23.037 "nvme_io_md": false, 00:19:23.037 "write_zeroes": true, 00:19:23.037 "zcopy": true, 00:19:23.037 "get_zone_info": false, 00:19:23.037 "zone_management": false, 00:19:23.037 "zone_append": false, 00:19:23.037 "compare": false, 00:19:23.037 "compare_and_write": false, 00:19:23.037 "abort": true, 00:19:23.037 "seek_hole": false, 00:19:23.037 "seek_data": false, 00:19:23.037 "copy": true, 00:19:23.037 "nvme_iov_md": false 00:19:23.037 }, 00:19:23.037 "memory_domains": [ 00:19:23.037 { 00:19:23.037 "dma_device_id": "system", 00:19:23.037 "dma_device_type": 1 00:19:23.037 }, 00:19:23.037 { 00:19:23.037 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:23.037 "dma_device_type": 2 00:19:23.037 } 
00:19:23.037 ], 00:19:23.037 "driver_specific": {} 00:19:23.037 } 00:19:23.037 ] 00:19:23.037 06:29:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.037 06:29:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:23.037 06:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:19:23.037 06:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:23.037 06:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:23.037 06:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:23.037 06:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:23.037 06:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:23.037 06:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:23.037 06:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:23.037 06:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:23.037 06:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:23.037 06:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.037 06:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:23.038 06:29:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.038 06:29:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.038 06:29:07 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.038 06:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:23.038 "name": "Existed_Raid", 00:19:23.038 "uuid": "df260e0c-94e9-440a-8581-0a843aef8e81", 00:19:23.038 "strip_size_kb": 64, 00:19:23.038 "state": "online", 00:19:23.038 "raid_level": "raid5f", 00:19:23.038 "superblock": false, 00:19:23.038 "num_base_bdevs": 4, 00:19:23.038 "num_base_bdevs_discovered": 4, 00:19:23.038 "num_base_bdevs_operational": 4, 00:19:23.038 "base_bdevs_list": [ 00:19:23.038 { 00:19:23.038 "name": "NewBaseBdev", 00:19:23.038 "uuid": "b3259bd0-eb19-4bc9-a698-5c6a20a02a29", 00:19:23.038 "is_configured": true, 00:19:23.038 "data_offset": 0, 00:19:23.038 "data_size": 65536 00:19:23.038 }, 00:19:23.038 { 00:19:23.038 "name": "BaseBdev2", 00:19:23.038 "uuid": "19131ea7-bff1-4c7f-91d1-4f029d36bb2c", 00:19:23.038 "is_configured": true, 00:19:23.038 "data_offset": 0, 00:19:23.038 "data_size": 65536 00:19:23.038 }, 00:19:23.038 { 00:19:23.038 "name": "BaseBdev3", 00:19:23.038 "uuid": "ad8e9bdc-c774-420f-a1d4-56220ee82ba1", 00:19:23.038 "is_configured": true, 00:19:23.038 "data_offset": 0, 00:19:23.038 "data_size": 65536 00:19:23.038 }, 00:19:23.038 { 00:19:23.038 "name": "BaseBdev4", 00:19:23.038 "uuid": "02f9e51d-710f-43e3-adff-751765a0c4f0", 00:19:23.038 "is_configured": true, 00:19:23.038 "data_offset": 0, 00:19:23.038 "data_size": 65536 00:19:23.038 } 00:19:23.038 ] 00:19:23.038 }' 00:19:23.038 06:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:23.038 06:29:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.298 06:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:19:23.298 06:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:23.298 06:29:07 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:23.298 06:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:23.298 06:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:23.298 06:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:23.298 06:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:23.298 06:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:23.298 06:29:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.298 06:29:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.298 [2024-11-26 06:29:07.400412] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:23.298 06:29:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.558 06:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:23.558 "name": "Existed_Raid", 00:19:23.558 "aliases": [ 00:19:23.558 "df260e0c-94e9-440a-8581-0a843aef8e81" 00:19:23.558 ], 00:19:23.558 "product_name": "Raid Volume", 00:19:23.558 "block_size": 512, 00:19:23.558 "num_blocks": 196608, 00:19:23.558 "uuid": "df260e0c-94e9-440a-8581-0a843aef8e81", 00:19:23.558 "assigned_rate_limits": { 00:19:23.558 "rw_ios_per_sec": 0, 00:19:23.558 "rw_mbytes_per_sec": 0, 00:19:23.558 "r_mbytes_per_sec": 0, 00:19:23.558 "w_mbytes_per_sec": 0 00:19:23.558 }, 00:19:23.558 "claimed": false, 00:19:23.558 "zoned": false, 00:19:23.558 "supported_io_types": { 00:19:23.558 "read": true, 00:19:23.558 "write": true, 00:19:23.558 "unmap": false, 00:19:23.558 "flush": false, 00:19:23.558 "reset": true, 00:19:23.558 "nvme_admin": false, 00:19:23.558 "nvme_io": false, 00:19:23.558 "nvme_io_md": 
false, 00:19:23.558 "write_zeroes": true, 00:19:23.558 "zcopy": false, 00:19:23.558 "get_zone_info": false, 00:19:23.558 "zone_management": false, 00:19:23.558 "zone_append": false, 00:19:23.558 "compare": false, 00:19:23.558 "compare_and_write": false, 00:19:23.558 "abort": false, 00:19:23.558 "seek_hole": false, 00:19:23.558 "seek_data": false, 00:19:23.558 "copy": false, 00:19:23.558 "nvme_iov_md": false 00:19:23.558 }, 00:19:23.558 "driver_specific": { 00:19:23.558 "raid": { 00:19:23.558 "uuid": "df260e0c-94e9-440a-8581-0a843aef8e81", 00:19:23.558 "strip_size_kb": 64, 00:19:23.558 "state": "online", 00:19:23.558 "raid_level": "raid5f", 00:19:23.558 "superblock": false, 00:19:23.558 "num_base_bdevs": 4, 00:19:23.558 "num_base_bdevs_discovered": 4, 00:19:23.558 "num_base_bdevs_operational": 4, 00:19:23.558 "base_bdevs_list": [ 00:19:23.558 { 00:19:23.558 "name": "NewBaseBdev", 00:19:23.558 "uuid": "b3259bd0-eb19-4bc9-a698-5c6a20a02a29", 00:19:23.558 "is_configured": true, 00:19:23.558 "data_offset": 0, 00:19:23.558 "data_size": 65536 00:19:23.558 }, 00:19:23.558 { 00:19:23.558 "name": "BaseBdev2", 00:19:23.558 "uuid": "19131ea7-bff1-4c7f-91d1-4f029d36bb2c", 00:19:23.558 "is_configured": true, 00:19:23.558 "data_offset": 0, 00:19:23.558 "data_size": 65536 00:19:23.558 }, 00:19:23.558 { 00:19:23.558 "name": "BaseBdev3", 00:19:23.558 "uuid": "ad8e9bdc-c774-420f-a1d4-56220ee82ba1", 00:19:23.558 "is_configured": true, 00:19:23.558 "data_offset": 0, 00:19:23.558 "data_size": 65536 00:19:23.558 }, 00:19:23.558 { 00:19:23.558 "name": "BaseBdev4", 00:19:23.558 "uuid": "02f9e51d-710f-43e3-adff-751765a0c4f0", 00:19:23.558 "is_configured": true, 00:19:23.558 "data_offset": 0, 00:19:23.558 "data_size": 65536 00:19:23.558 } 00:19:23.558 ] 00:19:23.558 } 00:19:23.558 } 00:19:23.558 }' 00:19:23.558 06:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:23.558 06:29:07 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:19:23.558 BaseBdev2 00:19:23.558 BaseBdev3 00:19:23.558 BaseBdev4' 00:19:23.558 06:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:23.558 06:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:23.558 06:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:23.558 06:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:19:23.558 06:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:23.558 06:29:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.558 06:29:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.559 06:29:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.559 06:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:23.559 06:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:23.559 06:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:23.559 06:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:23.559 06:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:23.559 06:29:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.559 06:29:07 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:19:23.559 06:29:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.559 06:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:23.559 06:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:23.559 06:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:23.559 06:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:19:23.559 06:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:23.559 06:29:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.559 06:29:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.559 06:29:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.559 06:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:23.559 06:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:23.559 06:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:23.559 06:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:23.559 06:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:19:23.559 06:29:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.559 06:29:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.819 06:29:07 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.819 06:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:23.819 06:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:23.819 06:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:23.819 06:29:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.819 06:29:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.819 [2024-11-26 06:29:07.727565] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:23.819 [2024-11-26 06:29:07.727600] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:23.819 [2024-11-26 06:29:07.727703] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:23.819 [2024-11-26 06:29:07.728043] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:23.819 [2024-11-26 06:29:07.728067] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:19:23.819 06:29:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.819 06:29:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 83343 00:19:23.819 06:29:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 83343 ']' 00:19:23.819 06:29:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 83343 00:19:23.819 06:29:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:19:23.819 06:29:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:19:23.819 06:29:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83343 00:19:23.819 06:29:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:23.819 06:29:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:23.819 06:29:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83343' 00:19:23.819 killing process with pid 83343 00:19:23.819 06:29:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 83343 00:19:23.819 [2024-11-26 06:29:07.765099] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:23.819 06:29:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 83343 00:19:24.386 [2024-11-26 06:29:08.219805] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:25.323 06:29:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:19:25.323 00:19:25.323 real 0m11.988s 00:19:25.323 user 0m18.669s 00:19:25.323 sys 0m2.430s 00:19:25.323 06:29:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:25.323 06:29:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.323 ************************************ 00:19:25.323 END TEST raid5f_state_function_test 00:19:25.323 ************************************ 00:19:25.583 06:29:09 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:19:25.583 06:29:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:25.583 06:29:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:25.583 06:29:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:25.583 ************************************ 00:19:25.583 START TEST 
raid5f_state_function_test_sb 00:19:25.583 ************************************ 00:19:25.583 06:29:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:19:25.583 06:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:19:25.583 06:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:19:25.583 06:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:19:25.583 06:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:25.583 06:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:25.583 06:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:25.583 06:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:25.583 06:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:25.583 06:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:25.583 06:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:25.583 06:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:25.583 06:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:25.583 06:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:19:25.583 06:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:25.584 06:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:25.584 06:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:19:25.584 
06:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:25.584 06:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:25.584 06:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:25.584 06:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:25.584 06:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:25.584 06:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:25.584 06:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:25.584 06:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:25.584 06:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:19:25.584 06:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:19:25.584 06:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:19:25.584 06:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:19:25.584 06:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:19:25.584 06:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=84016 00:19:25.584 06:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:25.584 06:29:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 84016' 00:19:25.584 Process raid pid: 84016 00:19:25.584 06:29:09 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 84016 00:19:25.584 06:29:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 84016 ']' 00:19:25.584 06:29:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:25.584 06:29:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:25.584 06:29:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:25.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:25.584 06:29:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:25.584 06:29:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:25.584 [2024-11-26 06:29:09.651381] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:19:25.584 [2024-11-26 06:29:09.651630] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:25.844 [2024-11-26 06:29:09.833578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.844 [2024-11-26 06:29:09.976020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:26.103 [2024-11-26 06:29:10.221369] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:26.104 [2024-11-26 06:29:10.221527] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:26.674 06:29:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:26.674 06:29:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:19:26.674 06:29:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:19:26.674 06:29:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.674 06:29:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:26.674 [2024-11-26 06:29:10.516837] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:26.674 [2024-11-26 06:29:10.516900] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:26.674 [2024-11-26 06:29:10.516934] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:26.674 [2024-11-26 06:29:10.516945] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:26.674 [2024-11-26 06:29:10.516952] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:19:26.674 [2024-11-26 06:29:10.516963] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:26.674 [2024-11-26 06:29:10.516969] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:26.674 [2024-11-26 06:29:10.516979] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:26.674 06:29:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.674 06:29:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:26.674 06:29:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:26.674 06:29:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:26.674 06:29:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:26.674 06:29:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:26.674 06:29:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:26.674 06:29:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:26.674 06:29:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:26.674 06:29:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:26.674 06:29:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:26.674 06:29:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:26.674 06:29:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:19:26.674 06:29:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.674 06:29:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:26.674 06:29:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.674 06:29:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:26.674 "name": "Existed_Raid", 00:19:26.674 "uuid": "dd3a541c-5c31-4a40-86f1-a583fe5ceb0b", 00:19:26.674 "strip_size_kb": 64, 00:19:26.674 "state": "configuring", 00:19:26.674 "raid_level": "raid5f", 00:19:26.674 "superblock": true, 00:19:26.674 "num_base_bdevs": 4, 00:19:26.674 "num_base_bdevs_discovered": 0, 00:19:26.674 "num_base_bdevs_operational": 4, 00:19:26.674 "base_bdevs_list": [ 00:19:26.674 { 00:19:26.674 "name": "BaseBdev1", 00:19:26.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:26.674 "is_configured": false, 00:19:26.674 "data_offset": 0, 00:19:26.674 "data_size": 0 00:19:26.674 }, 00:19:26.674 { 00:19:26.674 "name": "BaseBdev2", 00:19:26.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:26.674 "is_configured": false, 00:19:26.674 "data_offset": 0, 00:19:26.674 "data_size": 0 00:19:26.674 }, 00:19:26.674 { 00:19:26.674 "name": "BaseBdev3", 00:19:26.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:26.674 "is_configured": false, 00:19:26.674 "data_offset": 0, 00:19:26.674 "data_size": 0 00:19:26.674 }, 00:19:26.674 { 00:19:26.674 "name": "BaseBdev4", 00:19:26.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:26.674 "is_configured": false, 00:19:26.674 "data_offset": 0, 00:19:26.674 "data_size": 0 00:19:26.674 } 00:19:26.674 ] 00:19:26.674 }' 00:19:26.674 06:29:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:26.674 06:29:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:19:26.935 06:29:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:26.935 06:29:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.935 06:29:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:26.935 [2024-11-26 06:29:10.964053] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:26.935 [2024-11-26 06:29:10.964159] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:19:26.935 06:29:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.935 06:29:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:19:26.935 06:29:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.935 06:29:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:26.935 [2024-11-26 06:29:10.976034] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:26.935 [2024-11-26 06:29:10.976128] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:26.935 [2024-11-26 06:29:10.976157] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:26.935 [2024-11-26 06:29:10.976180] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:26.935 [2024-11-26 06:29:10.976198] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:26.935 [2024-11-26 06:29:10.976219] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:26.935 [2024-11-26 06:29:10.976236] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:26.935 [2024-11-26 06:29:10.976296] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:26.935 06:29:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.935 06:29:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:26.935 06:29:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.935 06:29:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:26.935 [2024-11-26 06:29:11.028744] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:26.935 BaseBdev1 00:19:26.935 06:29:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.935 06:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:26.935 06:29:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:19:26.935 06:29:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:26.935 06:29:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:26.935 06:29:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:26.935 06:29:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:26.935 06:29:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:26.935 06:29:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.935 06:29:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:19:26.935 06:29:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.935 06:29:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:26.935 06:29:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.935 06:29:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:26.935 [ 00:19:26.935 { 00:19:26.935 "name": "BaseBdev1", 00:19:26.935 "aliases": [ 00:19:26.935 "06a66e4a-5889-4cfb-8d4e-ff5c502f5f80" 00:19:26.935 ], 00:19:26.935 "product_name": "Malloc disk", 00:19:26.935 "block_size": 512, 00:19:26.935 "num_blocks": 65536, 00:19:26.935 "uuid": "06a66e4a-5889-4cfb-8d4e-ff5c502f5f80", 00:19:26.935 "assigned_rate_limits": { 00:19:26.935 "rw_ios_per_sec": 0, 00:19:26.935 "rw_mbytes_per_sec": 0, 00:19:26.935 "r_mbytes_per_sec": 0, 00:19:26.935 "w_mbytes_per_sec": 0 00:19:26.935 }, 00:19:26.935 "claimed": true, 00:19:26.935 "claim_type": "exclusive_write", 00:19:26.935 "zoned": false, 00:19:26.935 "supported_io_types": { 00:19:26.935 "read": true, 00:19:26.935 "write": true, 00:19:26.935 "unmap": true, 00:19:26.935 "flush": true, 00:19:26.935 "reset": true, 00:19:26.935 "nvme_admin": false, 00:19:26.935 "nvme_io": false, 00:19:26.935 "nvme_io_md": false, 00:19:26.935 "write_zeroes": true, 00:19:26.935 "zcopy": true, 00:19:26.935 "get_zone_info": false, 00:19:26.935 "zone_management": false, 00:19:26.935 "zone_append": false, 00:19:26.935 "compare": false, 00:19:26.935 "compare_and_write": false, 00:19:26.935 "abort": true, 00:19:26.935 "seek_hole": false, 00:19:26.935 "seek_data": false, 00:19:26.935 "copy": true, 00:19:26.935 "nvme_iov_md": false 00:19:26.935 }, 00:19:26.935 "memory_domains": [ 00:19:26.935 { 00:19:26.935 "dma_device_id": "system", 00:19:26.935 "dma_device_type": 1 00:19:26.935 }, 00:19:26.935 { 00:19:26.935 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:19:26.935 "dma_device_type": 2 00:19:26.935 } 00:19:26.935 ], 00:19:26.935 "driver_specific": {} 00:19:26.935 } 00:19:26.935 ] 00:19:26.935 06:29:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.935 06:29:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:26.935 06:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:26.935 06:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:26.935 06:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:26.935 06:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:26.935 06:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:26.935 06:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:26.935 06:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:26.935 06:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:26.935 06:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:26.935 06:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:26.935 06:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:26.935 06:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:26.935 06:29:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.935 06:29:11 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:27.195 06:29:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.195 06:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:27.195 "name": "Existed_Raid", 00:19:27.195 "uuid": "1c9f187e-0731-479b-8b82-4b0952d8c2a1", 00:19:27.195 "strip_size_kb": 64, 00:19:27.195 "state": "configuring", 00:19:27.195 "raid_level": "raid5f", 00:19:27.195 "superblock": true, 00:19:27.195 "num_base_bdevs": 4, 00:19:27.195 "num_base_bdevs_discovered": 1, 00:19:27.195 "num_base_bdevs_operational": 4, 00:19:27.195 "base_bdevs_list": [ 00:19:27.195 { 00:19:27.195 "name": "BaseBdev1", 00:19:27.195 "uuid": "06a66e4a-5889-4cfb-8d4e-ff5c502f5f80", 00:19:27.195 "is_configured": true, 00:19:27.195 "data_offset": 2048, 00:19:27.195 "data_size": 63488 00:19:27.195 }, 00:19:27.195 { 00:19:27.195 "name": "BaseBdev2", 00:19:27.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.195 "is_configured": false, 00:19:27.195 "data_offset": 0, 00:19:27.195 "data_size": 0 00:19:27.195 }, 00:19:27.195 { 00:19:27.195 "name": "BaseBdev3", 00:19:27.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.195 "is_configured": false, 00:19:27.195 "data_offset": 0, 00:19:27.195 "data_size": 0 00:19:27.195 }, 00:19:27.195 { 00:19:27.195 "name": "BaseBdev4", 00:19:27.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.195 "is_configured": false, 00:19:27.195 "data_offset": 0, 00:19:27.195 "data_size": 0 00:19:27.195 } 00:19:27.195 ] 00:19:27.195 }' 00:19:27.195 06:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:27.195 06:29:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:27.455 06:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:27.455 06:29:11 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.455 06:29:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:27.455 [2024-11-26 06:29:11.424147] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:27.455 [2024-11-26 06:29:11.424268] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:27.455 06:29:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.455 06:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:19:27.455 06:29:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.455 06:29:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:27.455 [2024-11-26 06:29:11.436181] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:27.455 [2024-11-26 06:29:11.438517] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:27.455 [2024-11-26 06:29:11.438599] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:27.455 [2024-11-26 06:29:11.438630] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:27.455 [2024-11-26 06:29:11.438657] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:27.455 [2024-11-26 06:29:11.438676] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:27.455 [2024-11-26 06:29:11.438698] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:27.455 06:29:11 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.455 06:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:27.455 06:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:27.455 06:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:27.455 06:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:27.455 06:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:27.455 06:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:27.455 06:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:27.455 06:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:27.455 06:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:27.455 06:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:27.455 06:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:27.455 06:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:27.455 06:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:27.455 06:29:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.455 06:29:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:27.455 06:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:27.455 06:29:11 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.455 06:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:27.455 "name": "Existed_Raid", 00:19:27.455 "uuid": "dc3fe275-e2d8-495b-9b3f-c885a4e3924f", 00:19:27.455 "strip_size_kb": 64, 00:19:27.455 "state": "configuring", 00:19:27.456 "raid_level": "raid5f", 00:19:27.456 "superblock": true, 00:19:27.456 "num_base_bdevs": 4, 00:19:27.456 "num_base_bdevs_discovered": 1, 00:19:27.456 "num_base_bdevs_operational": 4, 00:19:27.456 "base_bdevs_list": [ 00:19:27.456 { 00:19:27.456 "name": "BaseBdev1", 00:19:27.456 "uuid": "06a66e4a-5889-4cfb-8d4e-ff5c502f5f80", 00:19:27.456 "is_configured": true, 00:19:27.456 "data_offset": 2048, 00:19:27.456 "data_size": 63488 00:19:27.456 }, 00:19:27.456 { 00:19:27.456 "name": "BaseBdev2", 00:19:27.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.456 "is_configured": false, 00:19:27.456 "data_offset": 0, 00:19:27.456 "data_size": 0 00:19:27.456 }, 00:19:27.456 { 00:19:27.456 "name": "BaseBdev3", 00:19:27.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.456 "is_configured": false, 00:19:27.456 "data_offset": 0, 00:19:27.456 "data_size": 0 00:19:27.456 }, 00:19:27.456 { 00:19:27.456 "name": "BaseBdev4", 00:19:27.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.456 "is_configured": false, 00:19:27.456 "data_offset": 0, 00:19:27.456 "data_size": 0 00:19:27.456 } 00:19:27.456 ] 00:19:27.456 }' 00:19:27.456 06:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:27.456 06:29:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:27.716 06:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:27.716 06:29:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:19:27.716 06:29:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:27.976 [2024-11-26 06:29:11.866169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:27.976 BaseBdev2 00:19:27.976 06:29:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.976 06:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:27.976 06:29:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:19:27.976 06:29:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:27.976 06:29:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:27.976 06:29:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:27.976 06:29:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:27.976 06:29:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:27.976 06:29:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.976 06:29:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:27.976 06:29:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.976 06:29:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:27.976 06:29:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.976 06:29:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:27.976 [ 00:19:27.976 { 00:19:27.976 "name": "BaseBdev2", 00:19:27.976 "aliases": [ 00:19:27.976 
"13e52ad5-d4e7-47ec-ba75-bbf922064db5" 00:19:27.976 ], 00:19:27.976 "product_name": "Malloc disk", 00:19:27.976 "block_size": 512, 00:19:27.976 "num_blocks": 65536, 00:19:27.976 "uuid": "13e52ad5-d4e7-47ec-ba75-bbf922064db5", 00:19:27.976 "assigned_rate_limits": { 00:19:27.976 "rw_ios_per_sec": 0, 00:19:27.976 "rw_mbytes_per_sec": 0, 00:19:27.976 "r_mbytes_per_sec": 0, 00:19:27.976 "w_mbytes_per_sec": 0 00:19:27.976 }, 00:19:27.976 "claimed": true, 00:19:27.976 "claim_type": "exclusive_write", 00:19:27.976 "zoned": false, 00:19:27.976 "supported_io_types": { 00:19:27.976 "read": true, 00:19:27.976 "write": true, 00:19:27.976 "unmap": true, 00:19:27.976 "flush": true, 00:19:27.976 "reset": true, 00:19:27.976 "nvme_admin": false, 00:19:27.976 "nvme_io": false, 00:19:27.976 "nvme_io_md": false, 00:19:27.976 "write_zeroes": true, 00:19:27.976 "zcopy": true, 00:19:27.976 "get_zone_info": false, 00:19:27.976 "zone_management": false, 00:19:27.976 "zone_append": false, 00:19:27.976 "compare": false, 00:19:27.976 "compare_and_write": false, 00:19:27.976 "abort": true, 00:19:27.976 "seek_hole": false, 00:19:27.976 "seek_data": false, 00:19:27.976 "copy": true, 00:19:27.976 "nvme_iov_md": false 00:19:27.976 }, 00:19:27.976 "memory_domains": [ 00:19:27.976 { 00:19:27.976 "dma_device_id": "system", 00:19:27.976 "dma_device_type": 1 00:19:27.976 }, 00:19:27.976 { 00:19:27.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:27.976 "dma_device_type": 2 00:19:27.976 } 00:19:27.976 ], 00:19:27.976 "driver_specific": {} 00:19:27.976 } 00:19:27.976 ] 00:19:27.976 06:29:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.976 06:29:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:27.976 06:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:27.976 06:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:19:27.976 06:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:27.976 06:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:27.976 06:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:27.976 06:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:27.976 06:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:27.976 06:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:27.976 06:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:27.976 06:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:27.976 06:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:27.976 06:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:27.976 06:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:27.976 06:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:27.976 06:29:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.976 06:29:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:27.976 06:29:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.976 06:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:27.976 "name": "Existed_Raid", 00:19:27.976 "uuid": 
"dc3fe275-e2d8-495b-9b3f-c885a4e3924f", 00:19:27.976 "strip_size_kb": 64, 00:19:27.976 "state": "configuring", 00:19:27.976 "raid_level": "raid5f", 00:19:27.976 "superblock": true, 00:19:27.976 "num_base_bdevs": 4, 00:19:27.976 "num_base_bdevs_discovered": 2, 00:19:27.976 "num_base_bdevs_operational": 4, 00:19:27.976 "base_bdevs_list": [ 00:19:27.976 { 00:19:27.976 "name": "BaseBdev1", 00:19:27.976 "uuid": "06a66e4a-5889-4cfb-8d4e-ff5c502f5f80", 00:19:27.976 "is_configured": true, 00:19:27.976 "data_offset": 2048, 00:19:27.976 "data_size": 63488 00:19:27.976 }, 00:19:27.976 { 00:19:27.976 "name": "BaseBdev2", 00:19:27.976 "uuid": "13e52ad5-d4e7-47ec-ba75-bbf922064db5", 00:19:27.976 "is_configured": true, 00:19:27.976 "data_offset": 2048, 00:19:27.976 "data_size": 63488 00:19:27.976 }, 00:19:27.976 { 00:19:27.976 "name": "BaseBdev3", 00:19:27.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.976 "is_configured": false, 00:19:27.976 "data_offset": 0, 00:19:27.976 "data_size": 0 00:19:27.976 }, 00:19:27.976 { 00:19:27.976 "name": "BaseBdev4", 00:19:27.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.976 "is_configured": false, 00:19:27.976 "data_offset": 0, 00:19:27.976 "data_size": 0 00:19:27.976 } 00:19:27.976 ] 00:19:27.976 }' 00:19:27.976 06:29:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:27.976 06:29:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:28.237 06:29:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:19:28.237 06:29:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.237 06:29:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:28.497 [2024-11-26 06:29:12.392147] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:28.497 BaseBdev3 
00:19:28.497 06:29:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.497 06:29:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:19:28.497 06:29:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:19:28.497 06:29:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:28.497 06:29:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:28.497 06:29:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:28.497 06:29:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:28.497 06:29:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:28.497 06:29:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.497 06:29:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:28.497 06:29:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.497 06:29:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:28.497 06:29:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.497 06:29:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:28.497 [ 00:19:28.497 { 00:19:28.497 "name": "BaseBdev3", 00:19:28.497 "aliases": [ 00:19:28.497 "fe4a76ef-bfe3-4245-a37e-c29e317aa741" 00:19:28.497 ], 00:19:28.497 "product_name": "Malloc disk", 00:19:28.497 "block_size": 512, 00:19:28.497 "num_blocks": 65536, 00:19:28.497 "uuid": "fe4a76ef-bfe3-4245-a37e-c29e317aa741", 00:19:28.497 
"assigned_rate_limits": { 00:19:28.497 "rw_ios_per_sec": 0, 00:19:28.497 "rw_mbytes_per_sec": 0, 00:19:28.497 "r_mbytes_per_sec": 0, 00:19:28.497 "w_mbytes_per_sec": 0 00:19:28.497 }, 00:19:28.497 "claimed": true, 00:19:28.497 "claim_type": "exclusive_write", 00:19:28.497 "zoned": false, 00:19:28.497 "supported_io_types": { 00:19:28.497 "read": true, 00:19:28.497 "write": true, 00:19:28.497 "unmap": true, 00:19:28.497 "flush": true, 00:19:28.497 "reset": true, 00:19:28.497 "nvme_admin": false, 00:19:28.497 "nvme_io": false, 00:19:28.497 "nvme_io_md": false, 00:19:28.497 "write_zeroes": true, 00:19:28.497 "zcopy": true, 00:19:28.497 "get_zone_info": false, 00:19:28.497 "zone_management": false, 00:19:28.497 "zone_append": false, 00:19:28.497 "compare": false, 00:19:28.498 "compare_and_write": false, 00:19:28.498 "abort": true, 00:19:28.498 "seek_hole": false, 00:19:28.498 "seek_data": false, 00:19:28.498 "copy": true, 00:19:28.498 "nvme_iov_md": false 00:19:28.498 }, 00:19:28.498 "memory_domains": [ 00:19:28.498 { 00:19:28.498 "dma_device_id": "system", 00:19:28.498 "dma_device_type": 1 00:19:28.498 }, 00:19:28.498 { 00:19:28.498 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:28.498 "dma_device_type": 2 00:19:28.498 } 00:19:28.498 ], 00:19:28.498 "driver_specific": {} 00:19:28.498 } 00:19:28.498 ] 00:19:28.498 06:29:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.498 06:29:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:28.498 06:29:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:28.498 06:29:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:28.498 06:29:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:28.498 06:29:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:19:28.498 06:29:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:28.498 06:29:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:28.498 06:29:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:28.498 06:29:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:28.498 06:29:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:28.498 06:29:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:28.498 06:29:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:28.498 06:29:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:28.498 06:29:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:28.498 06:29:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:28.498 06:29:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.498 06:29:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:28.498 06:29:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.498 06:29:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:28.498 "name": "Existed_Raid", 00:19:28.498 "uuid": "dc3fe275-e2d8-495b-9b3f-c885a4e3924f", 00:19:28.498 "strip_size_kb": 64, 00:19:28.498 "state": "configuring", 00:19:28.498 "raid_level": "raid5f", 00:19:28.498 "superblock": true, 00:19:28.498 "num_base_bdevs": 4, 00:19:28.498 "num_base_bdevs_discovered": 3, 
00:19:28.498 "num_base_bdevs_operational": 4, 00:19:28.498 "base_bdevs_list": [ 00:19:28.498 { 00:19:28.498 "name": "BaseBdev1", 00:19:28.498 "uuid": "06a66e4a-5889-4cfb-8d4e-ff5c502f5f80", 00:19:28.498 "is_configured": true, 00:19:28.498 "data_offset": 2048, 00:19:28.498 "data_size": 63488 00:19:28.498 }, 00:19:28.498 { 00:19:28.498 "name": "BaseBdev2", 00:19:28.498 "uuid": "13e52ad5-d4e7-47ec-ba75-bbf922064db5", 00:19:28.498 "is_configured": true, 00:19:28.498 "data_offset": 2048, 00:19:28.498 "data_size": 63488 00:19:28.498 }, 00:19:28.498 { 00:19:28.498 "name": "BaseBdev3", 00:19:28.498 "uuid": "fe4a76ef-bfe3-4245-a37e-c29e317aa741", 00:19:28.498 "is_configured": true, 00:19:28.498 "data_offset": 2048, 00:19:28.498 "data_size": 63488 00:19:28.498 }, 00:19:28.498 { 00:19:28.498 "name": "BaseBdev4", 00:19:28.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.498 "is_configured": false, 00:19:28.498 "data_offset": 0, 00:19:28.498 "data_size": 0 00:19:28.498 } 00:19:28.498 ] 00:19:28.498 }' 00:19:28.498 06:29:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:28.498 06:29:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:28.757 06:29:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:19:28.757 06:29:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.757 06:29:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:29.017 [2024-11-26 06:29:12.912239] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:29.017 [2024-11-26 06:29:12.912728] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:29.017 [2024-11-26 06:29:12.912785] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:29.017 [2024-11-26 
06:29:12.913129] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:29.017 BaseBdev4 00:19:29.017 06:29:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.017 06:29:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:19:29.017 06:29:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:19:29.017 06:29:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:29.017 06:29:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:29.017 06:29:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:29.017 06:29:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:29.017 06:29:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:29.017 06:29:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.017 06:29:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:29.017 [2024-11-26 06:29:12.920588] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:29.017 [2024-11-26 06:29:12.920648] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:29.017 [2024-11-26 06:29:12.920875] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:29.017 06:29:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.017 06:29:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:29.017 06:29:12 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.017 06:29:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:29.017 [ 00:19:29.017 { 00:19:29.017 "name": "BaseBdev4", 00:19:29.017 "aliases": [ 00:19:29.017 "1e8cca76-cb4a-48d2-9fcc-1c308438b39f" 00:19:29.017 ], 00:19:29.017 "product_name": "Malloc disk", 00:19:29.017 "block_size": 512, 00:19:29.017 "num_blocks": 65536, 00:19:29.017 "uuid": "1e8cca76-cb4a-48d2-9fcc-1c308438b39f", 00:19:29.017 "assigned_rate_limits": { 00:19:29.017 "rw_ios_per_sec": 0, 00:19:29.017 "rw_mbytes_per_sec": 0, 00:19:29.017 "r_mbytes_per_sec": 0, 00:19:29.017 "w_mbytes_per_sec": 0 00:19:29.017 }, 00:19:29.017 "claimed": true, 00:19:29.017 "claim_type": "exclusive_write", 00:19:29.017 "zoned": false, 00:19:29.017 "supported_io_types": { 00:19:29.017 "read": true, 00:19:29.017 "write": true, 00:19:29.017 "unmap": true, 00:19:29.017 "flush": true, 00:19:29.017 "reset": true, 00:19:29.017 "nvme_admin": false, 00:19:29.017 "nvme_io": false, 00:19:29.017 "nvme_io_md": false, 00:19:29.017 "write_zeroes": true, 00:19:29.017 "zcopy": true, 00:19:29.017 "get_zone_info": false, 00:19:29.017 "zone_management": false, 00:19:29.017 "zone_append": false, 00:19:29.017 "compare": false, 00:19:29.017 "compare_and_write": false, 00:19:29.017 "abort": true, 00:19:29.017 "seek_hole": false, 00:19:29.017 "seek_data": false, 00:19:29.017 "copy": true, 00:19:29.017 "nvme_iov_md": false 00:19:29.017 }, 00:19:29.017 "memory_domains": [ 00:19:29.017 { 00:19:29.017 "dma_device_id": "system", 00:19:29.017 "dma_device_type": 1 00:19:29.017 }, 00:19:29.017 { 00:19:29.017 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:29.017 "dma_device_type": 2 00:19:29.017 } 00:19:29.017 ], 00:19:29.017 "driver_specific": {} 00:19:29.017 } 00:19:29.017 ] 00:19:29.017 06:29:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.017 06:29:12 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:29.017 06:29:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:29.017 06:29:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:29.017 06:29:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:19:29.017 06:29:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:29.017 06:29:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:29.017 06:29:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:29.017 06:29:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:29.017 06:29:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:29.017 06:29:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:29.017 06:29:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:29.017 06:29:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:29.017 06:29:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:29.017 06:29:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.017 06:29:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:29.017 06:29:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.017 06:29:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:19:29.017 06:29:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.017 06:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:29.017 "name": "Existed_Raid", 00:19:29.017 "uuid": "dc3fe275-e2d8-495b-9b3f-c885a4e3924f", 00:19:29.017 "strip_size_kb": 64, 00:19:29.017 "state": "online", 00:19:29.017 "raid_level": "raid5f", 00:19:29.017 "superblock": true, 00:19:29.017 "num_base_bdevs": 4, 00:19:29.017 "num_base_bdevs_discovered": 4, 00:19:29.017 "num_base_bdevs_operational": 4, 00:19:29.017 "base_bdevs_list": [ 00:19:29.017 { 00:19:29.017 "name": "BaseBdev1", 00:19:29.017 "uuid": "06a66e4a-5889-4cfb-8d4e-ff5c502f5f80", 00:19:29.017 "is_configured": true, 00:19:29.017 "data_offset": 2048, 00:19:29.017 "data_size": 63488 00:19:29.017 }, 00:19:29.017 { 00:19:29.017 "name": "BaseBdev2", 00:19:29.018 "uuid": "13e52ad5-d4e7-47ec-ba75-bbf922064db5", 00:19:29.018 "is_configured": true, 00:19:29.018 "data_offset": 2048, 00:19:29.018 "data_size": 63488 00:19:29.018 }, 00:19:29.018 { 00:19:29.018 "name": "BaseBdev3", 00:19:29.018 "uuid": "fe4a76ef-bfe3-4245-a37e-c29e317aa741", 00:19:29.018 "is_configured": true, 00:19:29.018 "data_offset": 2048, 00:19:29.018 "data_size": 63488 00:19:29.018 }, 00:19:29.018 { 00:19:29.018 "name": "BaseBdev4", 00:19:29.018 "uuid": "1e8cca76-cb4a-48d2-9fcc-1c308438b39f", 00:19:29.018 "is_configured": true, 00:19:29.018 "data_offset": 2048, 00:19:29.018 "data_size": 63488 00:19:29.018 } 00:19:29.018 ] 00:19:29.018 }' 00:19:29.018 06:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:29.018 06:29:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:29.278 06:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:29.278 06:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:19:29.278 06:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:29.278 06:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:29.278 06:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:19:29.278 06:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:29.278 06:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:29.278 06:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:29.278 06:29:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.278 06:29:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:29.278 [2024-11-26 06:29:13.386025] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:29.278 06:29:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.538 06:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:29.538 "name": "Existed_Raid", 00:19:29.538 "aliases": [ 00:19:29.538 "dc3fe275-e2d8-495b-9b3f-c885a4e3924f" 00:19:29.538 ], 00:19:29.538 "product_name": "Raid Volume", 00:19:29.539 "block_size": 512, 00:19:29.539 "num_blocks": 190464, 00:19:29.539 "uuid": "dc3fe275-e2d8-495b-9b3f-c885a4e3924f", 00:19:29.539 "assigned_rate_limits": { 00:19:29.539 "rw_ios_per_sec": 0, 00:19:29.539 "rw_mbytes_per_sec": 0, 00:19:29.539 "r_mbytes_per_sec": 0, 00:19:29.539 "w_mbytes_per_sec": 0 00:19:29.539 }, 00:19:29.539 "claimed": false, 00:19:29.539 "zoned": false, 00:19:29.539 "supported_io_types": { 00:19:29.539 "read": true, 00:19:29.539 "write": true, 00:19:29.539 "unmap": false, 00:19:29.539 "flush": false, 
00:19:29.539 "reset": true, 00:19:29.539 "nvme_admin": false, 00:19:29.539 "nvme_io": false, 00:19:29.539 "nvme_io_md": false, 00:19:29.539 "write_zeroes": true, 00:19:29.539 "zcopy": false, 00:19:29.539 "get_zone_info": false, 00:19:29.539 "zone_management": false, 00:19:29.539 "zone_append": false, 00:19:29.539 "compare": false, 00:19:29.539 "compare_and_write": false, 00:19:29.539 "abort": false, 00:19:29.539 "seek_hole": false, 00:19:29.539 "seek_data": false, 00:19:29.539 "copy": false, 00:19:29.539 "nvme_iov_md": false 00:19:29.539 }, 00:19:29.539 "driver_specific": { 00:19:29.539 "raid": { 00:19:29.539 "uuid": "dc3fe275-e2d8-495b-9b3f-c885a4e3924f", 00:19:29.539 "strip_size_kb": 64, 00:19:29.539 "state": "online", 00:19:29.539 "raid_level": "raid5f", 00:19:29.539 "superblock": true, 00:19:29.539 "num_base_bdevs": 4, 00:19:29.539 "num_base_bdevs_discovered": 4, 00:19:29.539 "num_base_bdevs_operational": 4, 00:19:29.539 "base_bdevs_list": [ 00:19:29.539 { 00:19:29.539 "name": "BaseBdev1", 00:19:29.539 "uuid": "06a66e4a-5889-4cfb-8d4e-ff5c502f5f80", 00:19:29.539 "is_configured": true, 00:19:29.539 "data_offset": 2048, 00:19:29.539 "data_size": 63488 00:19:29.539 }, 00:19:29.539 { 00:19:29.539 "name": "BaseBdev2", 00:19:29.539 "uuid": "13e52ad5-d4e7-47ec-ba75-bbf922064db5", 00:19:29.539 "is_configured": true, 00:19:29.539 "data_offset": 2048, 00:19:29.539 "data_size": 63488 00:19:29.539 }, 00:19:29.539 { 00:19:29.539 "name": "BaseBdev3", 00:19:29.539 "uuid": "fe4a76ef-bfe3-4245-a37e-c29e317aa741", 00:19:29.539 "is_configured": true, 00:19:29.539 "data_offset": 2048, 00:19:29.539 "data_size": 63488 00:19:29.539 }, 00:19:29.539 { 00:19:29.539 "name": "BaseBdev4", 00:19:29.539 "uuid": "1e8cca76-cb4a-48d2-9fcc-1c308438b39f", 00:19:29.539 "is_configured": true, 00:19:29.539 "data_offset": 2048, 00:19:29.539 "data_size": 63488 00:19:29.539 } 00:19:29.539 ] 00:19:29.539 } 00:19:29.539 } 00:19:29.539 }' 00:19:29.539 06:29:13 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:29.539 06:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:29.539 BaseBdev2 00:19:29.539 BaseBdev3 00:19:29.539 BaseBdev4' 00:19:29.539 06:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:29.539 06:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:29.539 06:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:29.539 06:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:29.539 06:29:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.539 06:29:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:29.539 06:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:29.539 06:29:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.539 06:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:29.539 06:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:29.539 06:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:29.539 06:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:29.539 06:29:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.539 06:29:13 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:19:29.539 06:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:29.539 06:29:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.539 06:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:29.539 06:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:29.539 06:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:29.539 06:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:19:29.539 06:29:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.539 06:29:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:29.539 06:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:29.539 06:29:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.539 06:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:29.539 06:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:29.539 06:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:29.539 06:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:19:29.539 06:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:29.539 06:29:13 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.539 06:29:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:29.539 06:29:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.799 06:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:29.799 06:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:29.799 06:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:29.799 06:29:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.799 06:29:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:29.799 [2024-11-26 06:29:13.693300] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:29.799 06:29:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.799 06:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:29.799 06:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:19:29.799 06:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:29.799 06:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:19:29.799 06:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:29.799 06:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:19:29.799 06:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:29.799 06:29:13 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:29.799 06:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:29.799 06:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:29.799 06:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:29.799 06:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:29.799 06:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:29.799 06:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:29.799 06:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:29.799 06:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.799 06:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:29.799 06:29:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.799 06:29:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:29.799 06:29:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.799 06:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:29.799 "name": "Existed_Raid", 00:19:29.799 "uuid": "dc3fe275-e2d8-495b-9b3f-c885a4e3924f", 00:19:29.799 "strip_size_kb": 64, 00:19:29.799 "state": "online", 00:19:29.799 "raid_level": "raid5f", 00:19:29.799 "superblock": true, 00:19:29.799 "num_base_bdevs": 4, 00:19:29.799 "num_base_bdevs_discovered": 3, 00:19:29.799 "num_base_bdevs_operational": 3, 00:19:29.799 "base_bdevs_list": [ 00:19:29.799 { 00:19:29.799 "name": 
null, 00:19:29.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.799 "is_configured": false, 00:19:29.799 "data_offset": 0, 00:19:29.799 "data_size": 63488 00:19:29.799 }, 00:19:29.799 { 00:19:29.799 "name": "BaseBdev2", 00:19:29.799 "uuid": "13e52ad5-d4e7-47ec-ba75-bbf922064db5", 00:19:29.799 "is_configured": true, 00:19:29.799 "data_offset": 2048, 00:19:29.799 "data_size": 63488 00:19:29.799 }, 00:19:29.799 { 00:19:29.799 "name": "BaseBdev3", 00:19:29.799 "uuid": "fe4a76ef-bfe3-4245-a37e-c29e317aa741", 00:19:29.799 "is_configured": true, 00:19:29.799 "data_offset": 2048, 00:19:29.799 "data_size": 63488 00:19:29.799 }, 00:19:29.799 { 00:19:29.799 "name": "BaseBdev4", 00:19:29.799 "uuid": "1e8cca76-cb4a-48d2-9fcc-1c308438b39f", 00:19:29.799 "is_configured": true, 00:19:29.799 "data_offset": 2048, 00:19:29.799 "data_size": 63488 00:19:29.799 } 00:19:29.799 ] 00:19:29.799 }' 00:19:29.799 06:29:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:29.799 06:29:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:30.368 06:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:30.368 06:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:30.368 06:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:30.368 06:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:30.368 06:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.368 06:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:30.368 06:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.368 06:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:19:30.368 06:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:30.368 06:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:30.369 06:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.369 06:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:30.369 [2024-11-26 06:29:14.256645] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:30.369 [2024-11-26 06:29:14.256843] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:30.369 [2024-11-26 06:29:14.365215] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:30.369 06:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.369 06:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:30.369 06:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:30.369 06:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:30.369 06:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:30.369 06:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.369 06:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:30.369 06:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.369 06:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:30.369 06:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:19:30.369 06:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:19:30.369 06:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.369 06:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:30.369 [2024-11-26 06:29:14.433203] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:30.629 06:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.629 06:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:30.629 06:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:30.629 06:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:30.629 06:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:30.629 06:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.629 06:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:30.629 06:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.629 06:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:30.629 06:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:30.629 06:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:19:30.629 06:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.629 06:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:30.629 [2024-11-26 
06:29:14.609101] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:19:30.629 [2024-11-26 06:29:14.609167] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:30.629 06:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.629 06:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:30.629 06:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:30.629 06:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:30.629 06:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:30.629 06:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.629 06:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:30.629 06:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.889 06:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:30.889 06:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:30.889 06:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:19:30.889 06:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:19:30.890 06:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:30.890 06:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:30.890 06:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.890 06:29:14 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:30.890 BaseBdev2 00:19:30.890 06:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.890 06:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:19:30.890 06:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:19:30.890 06:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:30.890 06:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:30.890 06:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:30.890 06:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:30.890 06:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:30.890 06:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.890 06:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:30.890 06:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.890 06:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:30.890 06:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.890 06:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:30.890 [ 00:19:30.890 { 00:19:30.890 "name": "BaseBdev2", 00:19:30.890 "aliases": [ 00:19:30.890 "78b74fb4-82b0-470c-ad46-21a9f5d4cb2c" 00:19:30.890 ], 00:19:30.890 "product_name": "Malloc disk", 00:19:30.890 "block_size": 512, 00:19:30.890 
"num_blocks": 65536, 00:19:30.890 "uuid": "78b74fb4-82b0-470c-ad46-21a9f5d4cb2c", 00:19:30.890 "assigned_rate_limits": { 00:19:30.890 "rw_ios_per_sec": 0, 00:19:30.890 "rw_mbytes_per_sec": 0, 00:19:30.890 "r_mbytes_per_sec": 0, 00:19:30.890 "w_mbytes_per_sec": 0 00:19:30.890 }, 00:19:30.890 "claimed": false, 00:19:30.890 "zoned": false, 00:19:30.890 "supported_io_types": { 00:19:30.890 "read": true, 00:19:30.890 "write": true, 00:19:30.890 "unmap": true, 00:19:30.890 "flush": true, 00:19:30.890 "reset": true, 00:19:30.890 "nvme_admin": false, 00:19:30.890 "nvme_io": false, 00:19:30.890 "nvme_io_md": false, 00:19:30.890 "write_zeroes": true, 00:19:30.890 "zcopy": true, 00:19:30.890 "get_zone_info": false, 00:19:30.890 "zone_management": false, 00:19:30.890 "zone_append": false, 00:19:30.890 "compare": false, 00:19:30.890 "compare_and_write": false, 00:19:30.890 "abort": true, 00:19:30.890 "seek_hole": false, 00:19:30.890 "seek_data": false, 00:19:30.890 "copy": true, 00:19:30.890 "nvme_iov_md": false 00:19:30.890 }, 00:19:30.890 "memory_domains": [ 00:19:30.890 { 00:19:30.890 "dma_device_id": "system", 00:19:30.890 "dma_device_type": 1 00:19:30.890 }, 00:19:30.890 { 00:19:30.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:30.890 "dma_device_type": 2 00:19:30.890 } 00:19:30.890 ], 00:19:30.890 "driver_specific": {} 00:19:30.890 } 00:19:30.890 ] 00:19:30.890 06:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.890 06:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:30.890 06:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:30.890 06:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:30.890 06:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:19:30.890 06:29:14 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.890 06:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:30.890 BaseBdev3 00:19:30.890 06:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.890 06:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:19:30.890 06:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:19:30.890 06:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:30.890 06:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:30.890 06:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:30.890 06:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:30.890 06:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:30.890 06:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.890 06:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:30.890 06:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.890 06:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:30.890 06:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.890 06:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:30.890 [ 00:19:30.890 { 00:19:30.890 "name": "BaseBdev3", 00:19:30.890 "aliases": [ 00:19:30.890 
"3bdb5949-33f6-4ae9-b556-e5242e8074db" 00:19:30.890 ], 00:19:30.890 "product_name": "Malloc disk", 00:19:30.890 "block_size": 512, 00:19:30.890 "num_blocks": 65536, 00:19:30.890 "uuid": "3bdb5949-33f6-4ae9-b556-e5242e8074db", 00:19:30.890 "assigned_rate_limits": { 00:19:30.890 "rw_ios_per_sec": 0, 00:19:30.890 "rw_mbytes_per_sec": 0, 00:19:30.890 "r_mbytes_per_sec": 0, 00:19:30.890 "w_mbytes_per_sec": 0 00:19:30.890 }, 00:19:30.890 "claimed": false, 00:19:30.890 "zoned": false, 00:19:30.890 "supported_io_types": { 00:19:30.890 "read": true, 00:19:30.890 "write": true, 00:19:30.890 "unmap": true, 00:19:30.890 "flush": true, 00:19:30.890 "reset": true, 00:19:30.890 "nvme_admin": false, 00:19:30.890 "nvme_io": false, 00:19:30.890 "nvme_io_md": false, 00:19:30.890 "write_zeroes": true, 00:19:30.890 "zcopy": true, 00:19:30.890 "get_zone_info": false, 00:19:30.890 "zone_management": false, 00:19:30.890 "zone_append": false, 00:19:30.890 "compare": false, 00:19:30.890 "compare_and_write": false, 00:19:30.890 "abort": true, 00:19:30.890 "seek_hole": false, 00:19:30.890 "seek_data": false, 00:19:30.890 "copy": true, 00:19:30.890 "nvme_iov_md": false 00:19:30.890 }, 00:19:30.890 "memory_domains": [ 00:19:30.890 { 00:19:30.890 "dma_device_id": "system", 00:19:30.890 "dma_device_type": 1 00:19:30.890 }, 00:19:30.890 { 00:19:30.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:30.890 "dma_device_type": 2 00:19:30.890 } 00:19:30.890 ], 00:19:30.890 "driver_specific": {} 00:19:30.890 } 00:19:30.890 ] 00:19:30.890 06:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.890 06:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:30.891 06:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:30.891 06:29:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:30.891 06:29:14 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:19:30.891 06:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.891 06:29:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:30.891 BaseBdev4 00:19:30.891 06:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.891 06:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:19:31.166 06:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:19:31.166 06:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:31.166 06:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:31.166 06:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:31.166 06:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:31.166 06:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:31.166 06:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.166 06:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.166 06:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.166 06:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:31.166 06:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.166 06:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:19:31.166 [ 00:19:31.166 { 00:19:31.166 "name": "BaseBdev4", 00:19:31.166 "aliases": [ 00:19:31.166 "1eb77ec8-864e-45e7-a9ba-7e4529fcfd36" 00:19:31.166 ], 00:19:31.166 "product_name": "Malloc disk", 00:19:31.166 "block_size": 512, 00:19:31.166 "num_blocks": 65536, 00:19:31.166 "uuid": "1eb77ec8-864e-45e7-a9ba-7e4529fcfd36", 00:19:31.166 "assigned_rate_limits": { 00:19:31.166 "rw_ios_per_sec": 0, 00:19:31.166 "rw_mbytes_per_sec": 0, 00:19:31.166 "r_mbytes_per_sec": 0, 00:19:31.166 "w_mbytes_per_sec": 0 00:19:31.166 }, 00:19:31.166 "claimed": false, 00:19:31.166 "zoned": false, 00:19:31.166 "supported_io_types": { 00:19:31.166 "read": true, 00:19:31.166 "write": true, 00:19:31.166 "unmap": true, 00:19:31.166 "flush": true, 00:19:31.166 "reset": true, 00:19:31.166 "nvme_admin": false, 00:19:31.166 "nvme_io": false, 00:19:31.166 "nvme_io_md": false, 00:19:31.166 "write_zeroes": true, 00:19:31.166 "zcopy": true, 00:19:31.166 "get_zone_info": false, 00:19:31.166 "zone_management": false, 00:19:31.166 "zone_append": false, 00:19:31.166 "compare": false, 00:19:31.166 "compare_and_write": false, 00:19:31.166 "abort": true, 00:19:31.166 "seek_hole": false, 00:19:31.166 "seek_data": false, 00:19:31.166 "copy": true, 00:19:31.166 "nvme_iov_md": false 00:19:31.166 }, 00:19:31.166 "memory_domains": [ 00:19:31.166 { 00:19:31.166 "dma_device_id": "system", 00:19:31.166 "dma_device_type": 1 00:19:31.166 }, 00:19:31.166 { 00:19:31.166 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:31.166 "dma_device_type": 2 00:19:31.166 } 00:19:31.166 ], 00:19:31.166 "driver_specific": {} 00:19:31.166 } 00:19:31.166 ] 00:19:31.166 06:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.166 06:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:31.166 06:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:31.166 06:29:15 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:31.166 06:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:19:31.166 06:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.166 06:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.166 [2024-11-26 06:29:15.064222] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:31.166 [2024-11-26 06:29:15.064329] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:31.166 [2024-11-26 06:29:15.064359] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:31.166 [2024-11-26 06:29:15.066514] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:31.166 [2024-11-26 06:29:15.066582] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:31.166 06:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.166 06:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:31.166 06:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:31.166 06:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:31.166 06:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:31.166 06:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:31.166 06:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:19:31.166 06:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:31.166 06:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:31.166 06:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:31.166 06:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:31.166 06:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:31.166 06:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:31.166 06:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.166 06:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.166 06:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.166 06:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:31.166 "name": "Existed_Raid", 00:19:31.166 "uuid": "723804c7-957f-4e93-a0fe-56d2b0c99db8", 00:19:31.166 "strip_size_kb": 64, 00:19:31.166 "state": "configuring", 00:19:31.166 "raid_level": "raid5f", 00:19:31.166 "superblock": true, 00:19:31.166 "num_base_bdevs": 4, 00:19:31.166 "num_base_bdevs_discovered": 3, 00:19:31.166 "num_base_bdevs_operational": 4, 00:19:31.166 "base_bdevs_list": [ 00:19:31.166 { 00:19:31.166 "name": "BaseBdev1", 00:19:31.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.166 "is_configured": false, 00:19:31.166 "data_offset": 0, 00:19:31.166 "data_size": 0 00:19:31.166 }, 00:19:31.166 { 00:19:31.166 "name": "BaseBdev2", 00:19:31.166 "uuid": "78b74fb4-82b0-470c-ad46-21a9f5d4cb2c", 00:19:31.166 "is_configured": true, 00:19:31.166 "data_offset": 2048, 00:19:31.166 
"data_size": 63488 00:19:31.166 }, 00:19:31.166 { 00:19:31.166 "name": "BaseBdev3", 00:19:31.166 "uuid": "3bdb5949-33f6-4ae9-b556-e5242e8074db", 00:19:31.166 "is_configured": true, 00:19:31.166 "data_offset": 2048, 00:19:31.166 "data_size": 63488 00:19:31.166 }, 00:19:31.166 { 00:19:31.166 "name": "BaseBdev4", 00:19:31.166 "uuid": "1eb77ec8-864e-45e7-a9ba-7e4529fcfd36", 00:19:31.166 "is_configured": true, 00:19:31.166 "data_offset": 2048, 00:19:31.166 "data_size": 63488 00:19:31.166 } 00:19:31.166 ] 00:19:31.166 }' 00:19:31.166 06:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:31.166 06:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.425 06:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:19:31.425 06:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.425 06:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.425 [2024-11-26 06:29:15.535403] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:31.425 06:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.425 06:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:31.425 06:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:31.425 06:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:31.426 06:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:31.426 06:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:31.426 06:29:15 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:31.426 06:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:31.426 06:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:31.426 06:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:31.426 06:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:31.426 06:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:31.426 06:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:31.426 06:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.426 06:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.686 06:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.686 06:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:31.686 "name": "Existed_Raid", 00:19:31.686 "uuid": "723804c7-957f-4e93-a0fe-56d2b0c99db8", 00:19:31.686 "strip_size_kb": 64, 00:19:31.686 "state": "configuring", 00:19:31.686 "raid_level": "raid5f", 00:19:31.686 "superblock": true, 00:19:31.686 "num_base_bdevs": 4, 00:19:31.686 "num_base_bdevs_discovered": 2, 00:19:31.686 "num_base_bdevs_operational": 4, 00:19:31.686 "base_bdevs_list": [ 00:19:31.686 { 00:19:31.686 "name": "BaseBdev1", 00:19:31.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.686 "is_configured": false, 00:19:31.686 "data_offset": 0, 00:19:31.686 "data_size": 0 00:19:31.686 }, 00:19:31.686 { 00:19:31.686 "name": null, 00:19:31.686 "uuid": "78b74fb4-82b0-470c-ad46-21a9f5d4cb2c", 00:19:31.686 
"is_configured": false, 00:19:31.686 "data_offset": 0, 00:19:31.686 "data_size": 63488 00:19:31.686 }, 00:19:31.686 { 00:19:31.686 "name": "BaseBdev3", 00:19:31.686 "uuid": "3bdb5949-33f6-4ae9-b556-e5242e8074db", 00:19:31.686 "is_configured": true, 00:19:31.686 "data_offset": 2048, 00:19:31.686 "data_size": 63488 00:19:31.686 }, 00:19:31.686 { 00:19:31.686 "name": "BaseBdev4", 00:19:31.686 "uuid": "1eb77ec8-864e-45e7-a9ba-7e4529fcfd36", 00:19:31.686 "is_configured": true, 00:19:31.686 "data_offset": 2048, 00:19:31.686 "data_size": 63488 00:19:31.686 } 00:19:31.686 ] 00:19:31.686 }' 00:19:31.686 06:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:31.686 06:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.945 06:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:31.945 06:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:31.946 06:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.946 06:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.946 06:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.946 06:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:19:31.946 06:29:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:31.946 06:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.946 06:29:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.946 [2024-11-26 06:29:16.021840] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
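The trace above calls `waitforbdev BaseBdev1` right after `bdev_malloc_create`, polling `bdev_get_bdevs -b <name> -t 2000` until the bdev shows up. A minimal standalone sketch of that polling loop follows; the probe command is a parameter here (an assumption, so the sketch runs without a live SPDK target), while the 2000 ms default timeout mirrors the `bdev_timeout=2000` seen in the log.

```shell
#!/usr/bin/env bash
# Sketch of the waitforbdev-style polling loop from the trace: retry a probe
# command until it succeeds or a timeout (default 2000 ms, as in the log)
# expires. The real helper probes "rpc_cmd bdev_get_bdevs -b <name> -t 2000";
# here the probe is passed in so the sketch is self-contained.
waitforbdev() {
    local probe_cmd=$1          # hypothetical parameter, not in the real helper
    local timeout_ms=${2:-2000}
    local interval_ms=100
    local waited=0
    while (( waited < timeout_ms )); do
        if eval "$probe_cmd" >/dev/null 2>&1; then
            return 0            # bdev (probe) became available
        fi
        sleep 0.1
        (( waited += interval_ms ))
    done
    return 1                    # probe never succeeded within the timeout
}
```

For example, `waitforbdev "rpc.py bdev_get_bdevs -b BaseBdev1"` would keep the same shape against a running target.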
00:19:31.946 BaseBdev1 00:19:31.946 06:29:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.946 06:29:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:19:31.946 06:29:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:19:31.946 06:29:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:31.946 06:29:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:31.946 06:29:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:31.946 06:29:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:31.946 06:29:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:31.946 06:29:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.946 06:29:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.946 06:29:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.946 06:29:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:31.946 06:29:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.946 06:29:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.946 [ 00:19:31.946 { 00:19:31.946 "name": "BaseBdev1", 00:19:31.946 "aliases": [ 00:19:31.946 "7fcfb4c5-fc11-4161-bf52-d6eec8140b06" 00:19:31.946 ], 00:19:31.946 "product_name": "Malloc disk", 00:19:31.946 "block_size": 512, 00:19:31.946 "num_blocks": 65536, 00:19:31.946 "uuid": "7fcfb4c5-fc11-4161-bf52-d6eec8140b06", 
00:19:31.946 "assigned_rate_limits": { 00:19:31.946 "rw_ios_per_sec": 0, 00:19:31.946 "rw_mbytes_per_sec": 0, 00:19:31.946 "r_mbytes_per_sec": 0, 00:19:31.946 "w_mbytes_per_sec": 0 00:19:31.946 }, 00:19:31.946 "claimed": true, 00:19:31.946 "claim_type": "exclusive_write", 00:19:31.946 "zoned": false, 00:19:31.946 "supported_io_types": { 00:19:31.946 "read": true, 00:19:31.946 "write": true, 00:19:31.946 "unmap": true, 00:19:31.946 "flush": true, 00:19:31.946 "reset": true, 00:19:31.946 "nvme_admin": false, 00:19:31.946 "nvme_io": false, 00:19:31.946 "nvme_io_md": false, 00:19:31.946 "write_zeroes": true, 00:19:31.946 "zcopy": true, 00:19:31.946 "get_zone_info": false, 00:19:31.946 "zone_management": false, 00:19:31.946 "zone_append": false, 00:19:31.946 "compare": false, 00:19:31.946 "compare_and_write": false, 00:19:31.946 "abort": true, 00:19:31.946 "seek_hole": false, 00:19:31.946 "seek_data": false, 00:19:31.946 "copy": true, 00:19:31.946 "nvme_iov_md": false 00:19:31.946 }, 00:19:31.946 "memory_domains": [ 00:19:31.946 { 00:19:31.946 "dma_device_id": "system", 00:19:31.946 "dma_device_type": 1 00:19:31.946 }, 00:19:31.946 { 00:19:31.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:31.946 "dma_device_type": 2 00:19:31.946 } 00:19:31.946 ], 00:19:31.946 "driver_specific": {} 00:19:31.946 } 00:19:31.946 ] 00:19:31.946 06:29:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.946 06:29:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:31.946 06:29:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:31.946 06:29:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:31.946 06:29:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:31.946 06:29:16 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:31.946 06:29:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:31.946 06:29:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:31.946 06:29:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:31.946 06:29:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:31.946 06:29:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:31.946 06:29:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:31.946 06:29:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:31.946 06:29:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:31.946 06:29:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.946 06:29:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:32.205 06:29:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.205 06:29:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:32.205 "name": "Existed_Raid", 00:19:32.205 "uuid": "723804c7-957f-4e93-a0fe-56d2b0c99db8", 00:19:32.205 "strip_size_kb": 64, 00:19:32.205 "state": "configuring", 00:19:32.205 "raid_level": "raid5f", 00:19:32.206 "superblock": true, 00:19:32.206 "num_base_bdevs": 4, 00:19:32.206 "num_base_bdevs_discovered": 3, 00:19:32.206 "num_base_bdevs_operational": 4, 00:19:32.206 "base_bdevs_list": [ 00:19:32.206 { 00:19:32.206 "name": "BaseBdev1", 00:19:32.206 "uuid": "7fcfb4c5-fc11-4161-bf52-d6eec8140b06", 
00:19:32.206 "is_configured": true, 00:19:32.206 "data_offset": 2048, 00:19:32.206 "data_size": 63488 00:19:32.206 }, 00:19:32.206 { 00:19:32.206 "name": null, 00:19:32.206 "uuid": "78b74fb4-82b0-470c-ad46-21a9f5d4cb2c", 00:19:32.206 "is_configured": false, 00:19:32.206 "data_offset": 0, 00:19:32.206 "data_size": 63488 00:19:32.206 }, 00:19:32.206 { 00:19:32.206 "name": "BaseBdev3", 00:19:32.206 "uuid": "3bdb5949-33f6-4ae9-b556-e5242e8074db", 00:19:32.206 "is_configured": true, 00:19:32.206 "data_offset": 2048, 00:19:32.206 "data_size": 63488 00:19:32.206 }, 00:19:32.206 { 00:19:32.206 "name": "BaseBdev4", 00:19:32.206 "uuid": "1eb77ec8-864e-45e7-a9ba-7e4529fcfd36", 00:19:32.206 "is_configured": true, 00:19:32.206 "data_offset": 2048, 00:19:32.206 "data_size": 63488 00:19:32.206 } 00:19:32.206 ] 00:19:32.206 }' 00:19:32.206 06:29:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:32.206 06:29:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:32.466 06:29:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:32.466 06:29:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.466 06:29:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.466 06:29:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:32.466 06:29:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.466 06:29:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:19:32.466 06:29:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:19:32.466 06:29:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
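The checks at `bdev_raid.sh@295`/`@300` above pipe `bdev_raid_get_bdevs all` through `jq '.[0].base_bdevs_list[N].is_configured'` and compare the result against the expected literal. A self-contained sketch of that check on a trimmed copy of the JSON printed in the log (python3 stands in for jq so the sketch runs anywhere; the trimmed JSON literal is an assumption, reduced from the dump above):

```shell
#!/usr/bin/env bash
# Standalone version of the is_configured check from the trace. The real
# harness runs: rpc_cmd bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[N].is_configured'
# raid_bdevs below is a trimmed copy of the Existed_Raid dump in the log.
raid_bdevs='[{"name":"Existed_Raid","state":"configuring","base_bdevs_list":[
  {"name":"BaseBdev1","is_configured":true},
  {"name":null,"is_configured":false},
  {"name":null,"is_configured":false},
  {"name":"BaseBdev4","is_configured":true}]}]'
is_configured() {
    # Print "true"/"false" (jq-style lowercase) for base bdev at index $1.
    python3 -c 'import json,sys
v = json.loads(sys.argv[1])[0]["base_bdevs_list"][int(sys.argv[2])]["is_configured"]
print(str(v).lower())' "$raid_bdevs" "$1"
}
is_configured 2   # prints "false": slot 2 was emptied by bdev_raid_remove_base_bdev
```

The `[[ false == \f\a\l\s\e ]]` pattern in the trace is just bash's glob-escaped string comparison of that jq output.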
00:19:32.466 06:29:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:32.466 [2024-11-26 06:29:16.537033] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:32.466 06:29:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.466 06:29:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:32.466 06:29:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:32.466 06:29:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:32.466 06:29:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:32.466 06:29:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:32.466 06:29:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:32.466 06:29:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:32.466 06:29:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:32.466 06:29:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:32.466 06:29:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:32.466 06:29:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.466 06:29:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:32.466 06:29:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.466 06:29:16 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:19:32.466 06:29:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.466 06:29:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:32.466 "name": "Existed_Raid", 00:19:32.466 "uuid": "723804c7-957f-4e93-a0fe-56d2b0c99db8", 00:19:32.466 "strip_size_kb": 64, 00:19:32.466 "state": "configuring", 00:19:32.466 "raid_level": "raid5f", 00:19:32.466 "superblock": true, 00:19:32.466 "num_base_bdevs": 4, 00:19:32.466 "num_base_bdevs_discovered": 2, 00:19:32.466 "num_base_bdevs_operational": 4, 00:19:32.466 "base_bdevs_list": [ 00:19:32.466 { 00:19:32.466 "name": "BaseBdev1", 00:19:32.466 "uuid": "7fcfb4c5-fc11-4161-bf52-d6eec8140b06", 00:19:32.466 "is_configured": true, 00:19:32.466 "data_offset": 2048, 00:19:32.466 "data_size": 63488 00:19:32.466 }, 00:19:32.466 { 00:19:32.466 "name": null, 00:19:32.466 "uuid": "78b74fb4-82b0-470c-ad46-21a9f5d4cb2c", 00:19:32.466 "is_configured": false, 00:19:32.466 "data_offset": 0, 00:19:32.466 "data_size": 63488 00:19:32.466 }, 00:19:32.466 { 00:19:32.466 "name": null, 00:19:32.466 "uuid": "3bdb5949-33f6-4ae9-b556-e5242e8074db", 00:19:32.466 "is_configured": false, 00:19:32.466 "data_offset": 0, 00:19:32.466 "data_size": 63488 00:19:32.466 }, 00:19:32.466 { 00:19:32.466 "name": "BaseBdev4", 00:19:32.466 "uuid": "1eb77ec8-864e-45e7-a9ba-7e4529fcfd36", 00:19:32.466 "is_configured": true, 00:19:32.466 "data_offset": 2048, 00:19:32.466 "data_size": 63488 00:19:32.466 } 00:19:32.466 ] 00:19:32.466 }' 00:19:32.466 06:29:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:32.466 06:29:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:33.036 06:29:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.036 06:29:17 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:33.036 06:29:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.036 06:29:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:33.036 06:29:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.036 06:29:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:19:33.036 06:29:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:19:33.036 06:29:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.036 06:29:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:33.036 [2024-11-26 06:29:17.056270] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:33.036 06:29:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.036 06:29:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:33.036 06:29:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:33.036 06:29:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:33.036 06:29:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:33.036 06:29:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:33.036 06:29:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:33.036 06:29:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:19:33.036 06:29:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:33.036 06:29:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:33.036 06:29:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:33.036 06:29:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.036 06:29:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:33.036 06:29:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.036 06:29:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:33.036 06:29:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.036 06:29:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:33.036 "name": "Existed_Raid", 00:19:33.036 "uuid": "723804c7-957f-4e93-a0fe-56d2b0c99db8", 00:19:33.036 "strip_size_kb": 64, 00:19:33.036 "state": "configuring", 00:19:33.036 "raid_level": "raid5f", 00:19:33.036 "superblock": true, 00:19:33.036 "num_base_bdevs": 4, 00:19:33.036 "num_base_bdevs_discovered": 3, 00:19:33.036 "num_base_bdevs_operational": 4, 00:19:33.036 "base_bdevs_list": [ 00:19:33.036 { 00:19:33.036 "name": "BaseBdev1", 00:19:33.036 "uuid": "7fcfb4c5-fc11-4161-bf52-d6eec8140b06", 00:19:33.036 "is_configured": true, 00:19:33.036 "data_offset": 2048, 00:19:33.036 "data_size": 63488 00:19:33.036 }, 00:19:33.036 { 00:19:33.036 "name": null, 00:19:33.036 "uuid": "78b74fb4-82b0-470c-ad46-21a9f5d4cb2c", 00:19:33.036 "is_configured": false, 00:19:33.036 "data_offset": 0, 00:19:33.036 "data_size": 63488 00:19:33.036 }, 00:19:33.036 { 00:19:33.036 "name": "BaseBdev3", 00:19:33.036 "uuid": "3bdb5949-33f6-4ae9-b556-e5242e8074db", 
00:19:33.036 "is_configured": true, 00:19:33.036 "data_offset": 2048, 00:19:33.036 "data_size": 63488 00:19:33.036 }, 00:19:33.036 { 00:19:33.036 "name": "BaseBdev4", 00:19:33.036 "uuid": "1eb77ec8-864e-45e7-a9ba-7e4529fcfd36", 00:19:33.036 "is_configured": true, 00:19:33.036 "data_offset": 2048, 00:19:33.036 "data_size": 63488 00:19:33.036 } 00:19:33.036 ] 00:19:33.036 }' 00:19:33.036 06:29:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:33.036 06:29:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:33.615 06:29:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.615 06:29:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:33.615 06:29:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.615 06:29:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:33.615 06:29:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.615 06:29:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:19:33.615 06:29:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:33.615 06:29:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.615 06:29:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:33.615 [2024-11-26 06:29:17.567401] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:33.615 06:29:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.615 06:29:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:19:33.615 06:29:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:33.615 06:29:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:33.615 06:29:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:33.615 06:29:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:33.615 06:29:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:33.615 06:29:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:33.615 06:29:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:33.615 06:29:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:33.615 06:29:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:33.615 06:29:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.615 06:29:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.615 06:29:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:33.615 06:29:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:33.615 06:29:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.615 06:29:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:33.615 "name": "Existed_Raid", 00:19:33.615 "uuid": "723804c7-957f-4e93-a0fe-56d2b0c99db8", 00:19:33.615 "strip_size_kb": 64, 00:19:33.615 "state": "configuring", 00:19:33.615 "raid_level": "raid5f", 
00:19:33.615 "superblock": true, 00:19:33.615 "num_base_bdevs": 4, 00:19:33.615 "num_base_bdevs_discovered": 2, 00:19:33.615 "num_base_bdevs_operational": 4, 00:19:33.615 "base_bdevs_list": [ 00:19:33.615 { 00:19:33.615 "name": null, 00:19:33.615 "uuid": "7fcfb4c5-fc11-4161-bf52-d6eec8140b06", 00:19:33.615 "is_configured": false, 00:19:33.615 "data_offset": 0, 00:19:33.615 "data_size": 63488 00:19:33.615 }, 00:19:33.615 { 00:19:33.615 "name": null, 00:19:33.615 "uuid": "78b74fb4-82b0-470c-ad46-21a9f5d4cb2c", 00:19:33.615 "is_configured": false, 00:19:33.615 "data_offset": 0, 00:19:33.615 "data_size": 63488 00:19:33.615 }, 00:19:33.615 { 00:19:33.615 "name": "BaseBdev3", 00:19:33.615 "uuid": "3bdb5949-33f6-4ae9-b556-e5242e8074db", 00:19:33.615 "is_configured": true, 00:19:33.615 "data_offset": 2048, 00:19:33.615 "data_size": 63488 00:19:33.615 }, 00:19:33.615 { 00:19:33.615 "name": "BaseBdev4", 00:19:33.615 "uuid": "1eb77ec8-864e-45e7-a9ba-7e4529fcfd36", 00:19:33.615 "is_configured": true, 00:19:33.615 "data_offset": 2048, 00:19:33.615 "data_size": 63488 00:19:33.615 } 00:19:33.615 ] 00:19:33.615 }' 00:19:33.615 06:29:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:33.615 06:29:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:34.183 06:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:34.183 06:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:34.183 06:29:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.183 06:29:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:34.183 06:29:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.183 06:29:18 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:19:34.183 06:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:19:34.183 06:29:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.183 06:29:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:34.183 [2024-11-26 06:29:18.168650] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:34.183 06:29:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.183 06:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:34.183 06:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:34.183 06:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:34.183 06:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:34.183 06:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:34.183 06:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:34.183 06:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:34.183 06:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:34.183 06:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:34.183 06:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:34.183 06:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
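`verify_raid_bdev_state`, invoked repeatedly above, cross-checks fields of the raid bdev JSON (state, level, strip size, discovered/operational counts) against expected values. A minimal standalone sketch of one such consistency check, counting the configured entries in `base_bdevs_list` and comparing with `num_base_bdevs_discovered`; the JSON literal is an assumption, trimmed from the dump in the log, and python3 stands in for the harness's jq parsing:

```shell
#!/usr/bin/env bash
# Sketch of one verify_raid_bdev_state-style consistency check: the number of
# base bdevs with is_configured=true should equal num_base_bdevs_discovered.
# info is a trimmed copy of the Existed_Raid JSON after BaseBdev2 was re-added.
info='{"name":"Existed_Raid","state":"configuring","raid_level":"raid5f",
  "num_base_bdevs":4,"num_base_bdevs_discovered":3,
  "base_bdevs_list":[{"is_configured":false},{"is_configured":true},
                     {"is_configured":true},{"is_configured":true}]}'
discovered=$(python3 -c 'import json,sys
d = json.loads(sys.argv[1])
print(sum(1 for b in d["base_bdevs_list"] if b["is_configured"]))' "$info")
expected=$(python3 -c 'import json,sys
print(json.loads(sys.argv[1])["num_base_bdevs_discovered"])' "$info")
[[ $discovered -eq $expected ]] && echo consistent
```

With three of four slots configured (BaseBdev1 still pending), both counts are 3 and the check passes.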
00:19:34.183 06:29:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.183 06:29:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:34.183 06:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:34.183 06:29:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.183 06:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:34.183 "name": "Existed_Raid", 00:19:34.183 "uuid": "723804c7-957f-4e93-a0fe-56d2b0c99db8", 00:19:34.183 "strip_size_kb": 64, 00:19:34.183 "state": "configuring", 00:19:34.183 "raid_level": "raid5f", 00:19:34.183 "superblock": true, 00:19:34.183 "num_base_bdevs": 4, 00:19:34.183 "num_base_bdevs_discovered": 3, 00:19:34.183 "num_base_bdevs_operational": 4, 00:19:34.183 "base_bdevs_list": [ 00:19:34.183 { 00:19:34.183 "name": null, 00:19:34.183 "uuid": "7fcfb4c5-fc11-4161-bf52-d6eec8140b06", 00:19:34.183 "is_configured": false, 00:19:34.183 "data_offset": 0, 00:19:34.183 "data_size": 63488 00:19:34.183 }, 00:19:34.183 { 00:19:34.183 "name": "BaseBdev2", 00:19:34.183 "uuid": "78b74fb4-82b0-470c-ad46-21a9f5d4cb2c", 00:19:34.183 "is_configured": true, 00:19:34.183 "data_offset": 2048, 00:19:34.183 "data_size": 63488 00:19:34.183 }, 00:19:34.183 { 00:19:34.183 "name": "BaseBdev3", 00:19:34.183 "uuid": "3bdb5949-33f6-4ae9-b556-e5242e8074db", 00:19:34.183 "is_configured": true, 00:19:34.183 "data_offset": 2048, 00:19:34.183 "data_size": 63488 00:19:34.183 }, 00:19:34.183 { 00:19:34.183 "name": "BaseBdev4", 00:19:34.183 "uuid": "1eb77ec8-864e-45e7-a9ba-7e4529fcfd36", 00:19:34.183 "is_configured": true, 00:19:34.183 "data_offset": 2048, 00:19:34.183 "data_size": 63488 00:19:34.183 } 00:19:34.183 ] 00:19:34.183 }' 00:19:34.183 06:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 
-- # xtrace_disable 00:19:34.183 06:29:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:34.754 06:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:34.754 06:29:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.754 06:29:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:34.754 06:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:34.754 06:29:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.754 06:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:19:34.754 06:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:34.754 06:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:34.754 06:29:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.754 06:29:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:34.754 06:29:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.754 06:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7fcfb4c5-fc11-4161-bf52-d6eec8140b06 00:19:34.754 06:29:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.754 06:29:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:34.754 [2024-11-26 06:29:18.739143] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:19:34.754 [2024-11-26 06:29:18.739577] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:34.754 [2024-11-26 06:29:18.739628] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:34.754 [2024-11-26 06:29:18.739970] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:19:34.754 NewBaseBdev 00:19:34.754 06:29:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.754 06:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:19:34.754 06:29:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:19:34.754 06:29:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:34.754 06:29:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:34.754 06:29:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:34.754 06:29:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:34.754 06:29:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:34.754 06:29:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.754 06:29:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:34.754 [2024-11-26 06:29:18.747609] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:34.754 [2024-11-26 06:29:18.747673] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:19:34.754 [2024-11-26 06:29:18.747916] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:34.754 06:29:18 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.754 06:29:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:34.754 06:29:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.754 06:29:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:34.754 [ 00:19:34.754 { 00:19:34.754 "name": "NewBaseBdev", 00:19:34.754 "aliases": [ 00:19:34.754 "7fcfb4c5-fc11-4161-bf52-d6eec8140b06" 00:19:34.754 ], 00:19:34.754 "product_name": "Malloc disk", 00:19:34.754 "block_size": 512, 00:19:34.754 "num_blocks": 65536, 00:19:34.754 "uuid": "7fcfb4c5-fc11-4161-bf52-d6eec8140b06", 00:19:34.754 "assigned_rate_limits": { 00:19:34.754 "rw_ios_per_sec": 0, 00:19:34.754 "rw_mbytes_per_sec": 0, 00:19:34.754 "r_mbytes_per_sec": 0, 00:19:34.754 "w_mbytes_per_sec": 0 00:19:34.754 }, 00:19:34.754 "claimed": true, 00:19:34.754 "claim_type": "exclusive_write", 00:19:34.754 "zoned": false, 00:19:34.754 "supported_io_types": { 00:19:34.754 "read": true, 00:19:34.754 "write": true, 00:19:34.754 "unmap": true, 00:19:34.754 "flush": true, 00:19:34.754 "reset": true, 00:19:34.754 "nvme_admin": false, 00:19:34.754 "nvme_io": false, 00:19:34.754 "nvme_io_md": false, 00:19:34.754 "write_zeroes": true, 00:19:34.754 "zcopy": true, 00:19:34.754 "get_zone_info": false, 00:19:34.754 "zone_management": false, 00:19:34.754 "zone_append": false, 00:19:34.754 "compare": false, 00:19:34.754 "compare_and_write": false, 00:19:34.754 "abort": true, 00:19:34.754 "seek_hole": false, 00:19:34.754 "seek_data": false, 00:19:34.754 "copy": true, 00:19:34.754 "nvme_iov_md": false 00:19:34.754 }, 00:19:34.754 "memory_domains": [ 00:19:34.754 { 00:19:34.754 "dma_device_id": "system", 00:19:34.754 "dma_device_type": 1 00:19:34.754 }, 00:19:34.754 { 00:19:34.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:34.754 "dma_device_type": 2 00:19:34.754 } 
00:19:34.754 ], 00:19:34.754 "driver_specific": {} 00:19:34.754 } 00:19:34.754 ] 00:19:34.754 06:29:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.754 06:29:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:34.754 06:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:19:34.754 06:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:34.754 06:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:34.754 06:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:34.754 06:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:34.754 06:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:34.754 06:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:34.754 06:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:34.754 06:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:34.754 06:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:34.754 06:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:34.754 06:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:34.754 06:29:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.754 06:29:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:34.754 
06:29:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.754 06:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:34.754 "name": "Existed_Raid", 00:19:34.754 "uuid": "723804c7-957f-4e93-a0fe-56d2b0c99db8", 00:19:34.754 "strip_size_kb": 64, 00:19:34.754 "state": "online", 00:19:34.754 "raid_level": "raid5f", 00:19:34.754 "superblock": true, 00:19:34.754 "num_base_bdevs": 4, 00:19:34.754 "num_base_bdevs_discovered": 4, 00:19:34.754 "num_base_bdevs_operational": 4, 00:19:34.754 "base_bdevs_list": [ 00:19:34.754 { 00:19:34.754 "name": "NewBaseBdev", 00:19:34.754 "uuid": "7fcfb4c5-fc11-4161-bf52-d6eec8140b06", 00:19:34.754 "is_configured": true, 00:19:34.754 "data_offset": 2048, 00:19:34.755 "data_size": 63488 00:19:34.755 }, 00:19:34.755 { 00:19:34.755 "name": "BaseBdev2", 00:19:34.755 "uuid": "78b74fb4-82b0-470c-ad46-21a9f5d4cb2c", 00:19:34.755 "is_configured": true, 00:19:34.755 "data_offset": 2048, 00:19:34.755 "data_size": 63488 00:19:34.755 }, 00:19:34.755 { 00:19:34.755 "name": "BaseBdev3", 00:19:34.755 "uuid": "3bdb5949-33f6-4ae9-b556-e5242e8074db", 00:19:34.755 "is_configured": true, 00:19:34.755 "data_offset": 2048, 00:19:34.755 "data_size": 63488 00:19:34.755 }, 00:19:34.755 { 00:19:34.755 "name": "BaseBdev4", 00:19:34.755 "uuid": "1eb77ec8-864e-45e7-a9ba-7e4529fcfd36", 00:19:34.755 "is_configured": true, 00:19:34.755 "data_offset": 2048, 00:19:34.755 "data_size": 63488 00:19:34.755 } 00:19:34.755 ] 00:19:34.755 }' 00:19:34.755 06:29:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:34.755 06:29:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:35.327 06:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:19:35.327 06:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=Existed_Raid 00:19:35.327 06:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:35.327 06:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:35.327 06:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:19:35.327 06:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:35.327 06:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:35.327 06:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:35.327 06:29:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.327 06:29:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:35.327 [2024-11-26 06:29:19.245442] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:35.327 06:29:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.327 06:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:35.327 "name": "Existed_Raid", 00:19:35.327 "aliases": [ 00:19:35.327 "723804c7-957f-4e93-a0fe-56d2b0c99db8" 00:19:35.327 ], 00:19:35.328 "product_name": "Raid Volume", 00:19:35.328 "block_size": 512, 00:19:35.328 "num_blocks": 190464, 00:19:35.328 "uuid": "723804c7-957f-4e93-a0fe-56d2b0c99db8", 00:19:35.328 "assigned_rate_limits": { 00:19:35.328 "rw_ios_per_sec": 0, 00:19:35.328 "rw_mbytes_per_sec": 0, 00:19:35.328 "r_mbytes_per_sec": 0, 00:19:35.328 "w_mbytes_per_sec": 0 00:19:35.328 }, 00:19:35.328 "claimed": false, 00:19:35.328 "zoned": false, 00:19:35.328 "supported_io_types": { 00:19:35.328 "read": true, 00:19:35.328 "write": true, 00:19:35.328 "unmap": false, 00:19:35.328 "flush": false, 
00:19:35.328 "reset": true, 00:19:35.328 "nvme_admin": false, 00:19:35.328 "nvme_io": false, 00:19:35.328 "nvme_io_md": false, 00:19:35.328 "write_zeroes": true, 00:19:35.328 "zcopy": false, 00:19:35.328 "get_zone_info": false, 00:19:35.328 "zone_management": false, 00:19:35.328 "zone_append": false, 00:19:35.328 "compare": false, 00:19:35.328 "compare_and_write": false, 00:19:35.328 "abort": false, 00:19:35.328 "seek_hole": false, 00:19:35.328 "seek_data": false, 00:19:35.328 "copy": false, 00:19:35.328 "nvme_iov_md": false 00:19:35.328 }, 00:19:35.328 "driver_specific": { 00:19:35.328 "raid": { 00:19:35.328 "uuid": "723804c7-957f-4e93-a0fe-56d2b0c99db8", 00:19:35.328 "strip_size_kb": 64, 00:19:35.328 "state": "online", 00:19:35.328 "raid_level": "raid5f", 00:19:35.328 "superblock": true, 00:19:35.328 "num_base_bdevs": 4, 00:19:35.328 "num_base_bdevs_discovered": 4, 00:19:35.328 "num_base_bdevs_operational": 4, 00:19:35.328 "base_bdevs_list": [ 00:19:35.328 { 00:19:35.328 "name": "NewBaseBdev", 00:19:35.328 "uuid": "7fcfb4c5-fc11-4161-bf52-d6eec8140b06", 00:19:35.328 "is_configured": true, 00:19:35.328 "data_offset": 2048, 00:19:35.328 "data_size": 63488 00:19:35.328 }, 00:19:35.328 { 00:19:35.328 "name": "BaseBdev2", 00:19:35.328 "uuid": "78b74fb4-82b0-470c-ad46-21a9f5d4cb2c", 00:19:35.328 "is_configured": true, 00:19:35.328 "data_offset": 2048, 00:19:35.328 "data_size": 63488 00:19:35.328 }, 00:19:35.328 { 00:19:35.328 "name": "BaseBdev3", 00:19:35.328 "uuid": "3bdb5949-33f6-4ae9-b556-e5242e8074db", 00:19:35.328 "is_configured": true, 00:19:35.328 "data_offset": 2048, 00:19:35.328 "data_size": 63488 00:19:35.328 }, 00:19:35.328 { 00:19:35.328 "name": "BaseBdev4", 00:19:35.328 "uuid": "1eb77ec8-864e-45e7-a9ba-7e4529fcfd36", 00:19:35.328 "is_configured": true, 00:19:35.328 "data_offset": 2048, 00:19:35.328 "data_size": 63488 00:19:35.328 } 00:19:35.328 ] 00:19:35.328 } 00:19:35.328 } 00:19:35.328 }' 00:19:35.328 06:29:19 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:35.328 06:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:19:35.328 BaseBdev2 00:19:35.328 BaseBdev3 00:19:35.328 BaseBdev4' 00:19:35.328 06:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:35.328 06:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:35.328 06:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:35.328 06:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:35.328 06:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:19:35.328 06:29:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.328 06:29:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:35.328 06:29:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.328 06:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:35.328 06:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:35.328 06:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:35.328 06:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:35.328 06:29:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.328 06:29:19 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:19:35.328 06:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:35.328 06:29:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.590 06:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:35.590 06:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:35.590 06:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:35.590 06:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:19:35.590 06:29:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.590 06:29:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:35.590 06:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:35.590 06:29:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.590 06:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:35.590 06:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:35.590 06:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:35.590 06:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:19:35.590 06:29:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.590 06:29:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:19:35.590 06:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:35.590 06:29:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.590 06:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:35.590 06:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:35.590 06:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:35.590 06:29:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.590 06:29:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:35.590 [2024-11-26 06:29:19.580636] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:35.590 [2024-11-26 06:29:19.580733] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:35.590 [2024-11-26 06:29:19.580850] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:35.590 [2024-11-26 06:29:19.581274] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:35.590 [2024-11-26 06:29:19.581292] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:19:35.590 06:29:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.590 06:29:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 84016 00:19:35.590 06:29:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 84016 ']' 00:19:35.590 06:29:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 
84016 00:19:35.590 06:29:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:19:35.590 06:29:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:35.590 06:29:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84016 00:19:35.590 killing process with pid 84016 00:19:35.590 06:29:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:35.590 06:29:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:35.590 06:29:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84016' 00:19:35.590 06:29:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 84016 00:19:35.590 [2024-11-26 06:29:19.634103] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:35.590 06:29:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 84016 00:19:36.159 [2024-11-26 06:29:20.077565] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:37.539 ************************************ 00:19:37.539 END TEST raid5f_state_function_test_sb 00:19:37.539 ************************************ 00:19:37.539 06:29:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:19:37.539 00:19:37.539 real 0m11.820s 00:19:37.539 user 0m18.314s 00:19:37.539 sys 0m2.276s 00:19:37.539 06:29:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:37.539 06:29:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:37.539 06:29:21 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:19:37.539 06:29:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 
-le 1 ']' 00:19:37.539 06:29:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:37.539 06:29:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:37.539 ************************************ 00:19:37.539 START TEST raid5f_superblock_test 00:19:37.539 ************************************ 00:19:37.539 06:29:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:19:37.539 06:29:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:19:37.539 06:29:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:19:37.539 06:29:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:37.539 06:29:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:19:37.539 06:29:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:37.539 06:29:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:19:37.539 06:29:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:37.539 06:29:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:19:37.539 06:29:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:19:37.539 06:29:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:19:37.539 06:29:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:37.539 06:29:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:37.539 06:29:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:37.539 06:29:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:19:37.539 06:29:21 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:19:37.539 06:29:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:19:37.539 06:29:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84687 00:19:37.539 06:29:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:19:37.539 06:29:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84687 00:19:37.540 06:29:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 84687 ']' 00:19:37.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:37.540 06:29:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:37.540 06:29:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:37.540 06:29:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:37.540 06:29:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:37.540 06:29:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:37.540 [2024-11-26 06:29:21.511168] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:19:37.540 [2024-11-26 06:29:21.511374] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84687 ] 00:19:37.799 [2024-11-26 06:29:21.694501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:37.799 [2024-11-26 06:29:21.835915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:38.059 [2024-11-26 06:29:22.076818] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:38.059 [2024-11-26 06:29:22.077029] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:38.319 06:29:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:38.319 06:29:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:19:38.319 06:29:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:19:38.319 06:29:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:38.319 06:29:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:19:38.319 06:29:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:19:38.319 06:29:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:38.319 06:29:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:38.319 06:29:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:38.319 06:29:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:38.319 06:29:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:19:38.319 06:29:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.319 06:29:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.319 malloc1 00:19:38.319 06:29:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.319 06:29:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:38.319 06:29:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.319 06:29:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.319 [2024-11-26 06:29:22.425758] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:38.319 [2024-11-26 06:29:22.425909] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:38.319 [2024-11-26 06:29:22.425988] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:38.319 [2024-11-26 06:29:22.426032] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:38.319 [2024-11-26 06:29:22.428611] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:38.319 [2024-11-26 06:29:22.428686] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:38.319 pt1 00:19:38.319 06:29:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.319 06:29:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:38.319 06:29:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:38.319 06:29:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:38.319 06:29:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:19:38.319 06:29:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:38.319 06:29:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:38.319 06:29:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:38.319 06:29:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:38.319 06:29:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:19:38.319 06:29:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.319 06:29:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.580 malloc2 00:19:38.580 06:29:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.580 06:29:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:38.580 06:29:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.580 06:29:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.580 [2024-11-26 06:29:22.491528] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:38.580 [2024-11-26 06:29:22.491591] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:38.580 [2024-11-26 06:29:22.491615] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:38.580 [2024-11-26 06:29:22.491624] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:38.580 [2024-11-26 06:29:22.494107] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:38.580 [2024-11-26 06:29:22.494139] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:38.580 pt2 00:19:38.580 06:29:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.580 06:29:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:38.580 06:29:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:38.580 06:29:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:19:38.580 06:29:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:19:38.580 06:29:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:19:38.580 06:29:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:38.580 06:29:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:38.580 06:29:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:38.580 06:29:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:19:38.580 06:29:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.580 06:29:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.580 malloc3 00:19:38.580 06:29:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.580 06:29:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:38.580 06:29:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.580 06:29:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.580 [2024-11-26 06:29:22.563748] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:38.580 [2024-11-26 06:29:22.563855] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:38.580 [2024-11-26 06:29:22.563915] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:38.580 [2024-11-26 06:29:22.563954] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:38.580 [2024-11-26 06:29:22.566657] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:38.580 [2024-11-26 06:29:22.566730] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:38.580 pt3 00:19:38.580 06:29:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.580 06:29:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:38.580 06:29:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:38.580 06:29:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:19:38.580 06:29:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:19:38.580 06:29:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:19:38.580 06:29:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:38.580 06:29:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:38.580 06:29:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:38.580 06:29:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:19:38.580 06:29:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.580 06:29:22 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.580 malloc4 00:19:38.580 06:29:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.580 06:29:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:38.580 06:29:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.580 06:29:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.580 [2024-11-26 06:29:22.625378] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:38.580 [2024-11-26 06:29:22.625496] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:38.580 [2024-11-26 06:29:22.625540] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:19:38.580 [2024-11-26 06:29:22.625601] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:38.580 [2024-11-26 06:29:22.628217] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:38.580 [2024-11-26 06:29:22.628285] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:38.580 pt4 00:19:38.580 06:29:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.580 06:29:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:38.580 06:29:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:38.580 06:29:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:19:38.580 06:29:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.580 06:29:22 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:19:38.580 [2024-11-26 06:29:22.637388] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:38.580 [2024-11-26 06:29:22.639548] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:38.580 [2024-11-26 06:29:22.639661] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:38.580 [2024-11-26 06:29:22.639762] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:38.580 [2024-11-26 06:29:22.640034] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:38.580 [2024-11-26 06:29:22.640104] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:38.580 [2024-11-26 06:29:22.640482] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:38.580 [2024-11-26 06:29:22.648114] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:38.580 [2024-11-26 06:29:22.648187] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:38.580 [2024-11-26 06:29:22.648459] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:38.580 06:29:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.580 06:29:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:19:38.580 06:29:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:38.580 06:29:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:38.580 06:29:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:38.580 06:29:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:38.580 
06:29:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:38.580 06:29:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:38.580 06:29:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:38.580 06:29:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:38.580 06:29:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:38.580 06:29:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.580 06:29:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:38.580 06:29:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.580 06:29:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.580 06:29:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.580 06:29:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:38.580 "name": "raid_bdev1", 00:19:38.581 "uuid": "c424764b-d3c4-4129-b2ab-b219fbbad482", 00:19:38.581 "strip_size_kb": 64, 00:19:38.581 "state": "online", 00:19:38.581 "raid_level": "raid5f", 00:19:38.581 "superblock": true, 00:19:38.581 "num_base_bdevs": 4, 00:19:38.581 "num_base_bdevs_discovered": 4, 00:19:38.581 "num_base_bdevs_operational": 4, 00:19:38.581 "base_bdevs_list": [ 00:19:38.581 { 00:19:38.581 "name": "pt1", 00:19:38.581 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:38.581 "is_configured": true, 00:19:38.581 "data_offset": 2048, 00:19:38.581 "data_size": 63488 00:19:38.581 }, 00:19:38.581 { 00:19:38.581 "name": "pt2", 00:19:38.581 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:38.581 "is_configured": true, 00:19:38.581 "data_offset": 2048, 00:19:38.581 
"data_size": 63488 00:19:38.581 }, 00:19:38.581 { 00:19:38.581 "name": "pt3", 00:19:38.581 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:38.581 "is_configured": true, 00:19:38.581 "data_offset": 2048, 00:19:38.581 "data_size": 63488 00:19:38.581 }, 00:19:38.581 { 00:19:38.581 "name": "pt4", 00:19:38.581 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:38.581 "is_configured": true, 00:19:38.581 "data_offset": 2048, 00:19:38.581 "data_size": 63488 00:19:38.581 } 00:19:38.581 ] 00:19:38.581 }' 00:19:38.581 06:29:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:38.581 06:29:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.150 06:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:39.150 06:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:39.150 06:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:39.150 06:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:39.150 06:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:39.150 06:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:39.150 06:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:39.150 06:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.150 06:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:39.150 06:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.150 [2024-11-26 06:29:23.046051] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:39.150 06:29:23 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.150 06:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:39.150 "name": "raid_bdev1", 00:19:39.150 "aliases": [ 00:19:39.150 "c424764b-d3c4-4129-b2ab-b219fbbad482" 00:19:39.150 ], 00:19:39.150 "product_name": "Raid Volume", 00:19:39.150 "block_size": 512, 00:19:39.150 "num_blocks": 190464, 00:19:39.150 "uuid": "c424764b-d3c4-4129-b2ab-b219fbbad482", 00:19:39.150 "assigned_rate_limits": { 00:19:39.150 "rw_ios_per_sec": 0, 00:19:39.150 "rw_mbytes_per_sec": 0, 00:19:39.150 "r_mbytes_per_sec": 0, 00:19:39.150 "w_mbytes_per_sec": 0 00:19:39.150 }, 00:19:39.150 "claimed": false, 00:19:39.150 "zoned": false, 00:19:39.150 "supported_io_types": { 00:19:39.150 "read": true, 00:19:39.150 "write": true, 00:19:39.150 "unmap": false, 00:19:39.150 "flush": false, 00:19:39.150 "reset": true, 00:19:39.150 "nvme_admin": false, 00:19:39.150 "nvme_io": false, 00:19:39.150 "nvme_io_md": false, 00:19:39.150 "write_zeroes": true, 00:19:39.150 "zcopy": false, 00:19:39.150 "get_zone_info": false, 00:19:39.150 "zone_management": false, 00:19:39.150 "zone_append": false, 00:19:39.150 "compare": false, 00:19:39.150 "compare_and_write": false, 00:19:39.150 "abort": false, 00:19:39.150 "seek_hole": false, 00:19:39.150 "seek_data": false, 00:19:39.150 "copy": false, 00:19:39.150 "nvme_iov_md": false 00:19:39.150 }, 00:19:39.150 "driver_specific": { 00:19:39.150 "raid": { 00:19:39.150 "uuid": "c424764b-d3c4-4129-b2ab-b219fbbad482", 00:19:39.150 "strip_size_kb": 64, 00:19:39.150 "state": "online", 00:19:39.150 "raid_level": "raid5f", 00:19:39.150 "superblock": true, 00:19:39.150 "num_base_bdevs": 4, 00:19:39.150 "num_base_bdevs_discovered": 4, 00:19:39.150 "num_base_bdevs_operational": 4, 00:19:39.150 "base_bdevs_list": [ 00:19:39.150 { 00:19:39.150 "name": "pt1", 00:19:39.150 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:39.150 "is_configured": true, 00:19:39.150 "data_offset": 2048, 
00:19:39.150 "data_size": 63488 00:19:39.150 }, 00:19:39.150 { 00:19:39.151 "name": "pt2", 00:19:39.151 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:39.151 "is_configured": true, 00:19:39.151 "data_offset": 2048, 00:19:39.151 "data_size": 63488 00:19:39.151 }, 00:19:39.151 { 00:19:39.151 "name": "pt3", 00:19:39.151 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:39.151 "is_configured": true, 00:19:39.151 "data_offset": 2048, 00:19:39.151 "data_size": 63488 00:19:39.151 }, 00:19:39.151 { 00:19:39.151 "name": "pt4", 00:19:39.151 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:39.151 "is_configured": true, 00:19:39.151 "data_offset": 2048, 00:19:39.151 "data_size": 63488 00:19:39.151 } 00:19:39.151 ] 00:19:39.151 } 00:19:39.151 } 00:19:39.151 }' 00:19:39.151 06:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:39.151 06:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:39.151 pt2 00:19:39.151 pt3 00:19:39.151 pt4' 00:19:39.151 06:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:39.151 06:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:39.151 06:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:39.151 06:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:39.151 06:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:39.151 06:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.151 06:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.151 06:29:23 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.151 06:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:39.151 06:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:39.151 06:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:39.151 06:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:39.151 06:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.151 06:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.151 06:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:39.151 06:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.151 06:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:39.151 06:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:39.151 06:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:39.151 06:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:19:39.151 06:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.151 06:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.151 06:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:39.151 06:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.411 06:29:23 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:39.411 06:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:39.411 06:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:39.411 06:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:19:39.411 06:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.411 06:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:39.411 06:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.411 06:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.411 06:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:39.411 06:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:39.411 06:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:39.411 06:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:19:39.411 06:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.411 06:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.411 [2024-11-26 06:29:23.377514] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:39.411 06:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.411 06:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c424764b-d3c4-4129-b2ab-b219fbbad482 00:19:39.412 06:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
c424764b-d3c4-4129-b2ab-b219fbbad482 ']' 00:19:39.412 06:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:39.412 06:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.412 06:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.412 [2024-11-26 06:29:23.421218] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:39.412 [2024-11-26 06:29:23.421252] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:39.412 [2024-11-26 06:29:23.421355] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:39.412 [2024-11-26 06:29:23.421461] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:39.412 [2024-11-26 06:29:23.421480] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:39.412 06:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.412 06:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:39.412 06:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.412 06:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:19:39.412 06:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.412 06:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.412 06:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:39.412 06:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:19:39.412 06:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:39.412 
06:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:19:39.412 06:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.412 06:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.412 06:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.412 06:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:39.412 06:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:19:39.412 06:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.412 06:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.412 06:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.412 06:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:39.412 06:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:19:39.412 06:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.412 06:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.412 06:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.412 06:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:39.412 06:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:19:39.412 06:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.412 06:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.412 06:29:23 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.412 06:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:19:39.412 06:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.412 06:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:39.412 06:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.687 06:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.687 06:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:19:39.687 06:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:19:39.687 06:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:19:39.687 06:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:19:39.687 06:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:39.687 06:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:39.687 06:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:39.687 06:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:39.687 06:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:19:39.687 06:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:19:39.687 06:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.687 [2024-11-26 06:29:23.592950] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:39.687 [2024-11-26 06:29:23.595515] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:39.687 [2024-11-26 06:29:23.595622] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:19:39.687 [2024-11-26 06:29:23.595685] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:19:39.687 [2024-11-26 06:29:23.595782] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:39.687 [2024-11-26 06:29:23.595903] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:39.687 [2024-11-26 06:29:23.595931] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:19:39.687 [2024-11-26 06:29:23.595954] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:19:39.687 [2024-11-26 06:29:23.595970] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:39.687 [2024-11-26 06:29:23.595982] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:19:39.687 request: 00:19:39.687 { 00:19:39.687 "name": "raid_bdev1", 00:19:39.687 "raid_level": "raid5f", 00:19:39.687 "base_bdevs": [ 00:19:39.687 "malloc1", 00:19:39.687 "malloc2", 00:19:39.687 "malloc3", 00:19:39.687 "malloc4" 00:19:39.687 ], 00:19:39.687 "strip_size_kb": 64, 00:19:39.687 "superblock": false, 00:19:39.687 "method": "bdev_raid_create", 00:19:39.687 "req_id": 1 00:19:39.687 } 00:19:39.687 Got JSON-RPC error response 
00:19:39.687 response: 00:19:39.687 { 00:19:39.687 "code": -17, 00:19:39.687 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:39.687 } 00:19:39.687 06:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:39.687 06:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:19:39.687 06:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:39.687 06:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:39.687 06:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:39.687 06:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:39.687 06:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:19:39.687 06:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.687 06:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.687 06:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.687 06:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:19:39.687 06:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:19:39.687 06:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:39.687 06:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.687 06:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.687 [2024-11-26 06:29:23.656816] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:39.687 [2024-11-26 06:29:23.656942] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:19:39.687 [2024-11-26 06:29:23.656984] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:39.687 [2024-11-26 06:29:23.657031] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:39.687 [2024-11-26 06:29:23.660055] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:39.687 [2024-11-26 06:29:23.660166] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:39.687 [2024-11-26 06:29:23.660305] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:39.687 [2024-11-26 06:29:23.660471] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:39.687 pt1 00:19:39.687 06:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.687 06:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:19:39.687 06:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:39.687 06:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:39.687 06:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:39.687 06:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:39.687 06:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:39.687 06:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:39.687 06:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:39.687 06:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:39.687 06:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:19:39.687 06:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:39.687 06:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:39.687 06:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.687 06:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.687 06:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.687 06:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:39.687 "name": "raid_bdev1", 00:19:39.687 "uuid": "c424764b-d3c4-4129-b2ab-b219fbbad482", 00:19:39.687 "strip_size_kb": 64, 00:19:39.687 "state": "configuring", 00:19:39.687 "raid_level": "raid5f", 00:19:39.687 "superblock": true, 00:19:39.687 "num_base_bdevs": 4, 00:19:39.687 "num_base_bdevs_discovered": 1, 00:19:39.688 "num_base_bdevs_operational": 4, 00:19:39.688 "base_bdevs_list": [ 00:19:39.688 { 00:19:39.688 "name": "pt1", 00:19:39.688 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:39.688 "is_configured": true, 00:19:39.688 "data_offset": 2048, 00:19:39.688 "data_size": 63488 00:19:39.688 }, 00:19:39.688 { 00:19:39.688 "name": null, 00:19:39.688 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:39.688 "is_configured": false, 00:19:39.688 "data_offset": 2048, 00:19:39.688 "data_size": 63488 00:19:39.688 }, 00:19:39.688 { 00:19:39.688 "name": null, 00:19:39.688 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:39.688 "is_configured": false, 00:19:39.688 "data_offset": 2048, 00:19:39.688 "data_size": 63488 00:19:39.688 }, 00:19:39.688 { 00:19:39.688 "name": null, 00:19:39.688 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:39.688 "is_configured": false, 00:19:39.688 "data_offset": 2048, 00:19:39.688 "data_size": 63488 00:19:39.688 } 00:19:39.688 ] 00:19:39.688 }' 
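The `verify_raid_bdev_state` helper traced above boils down to selecting the array's entry from `bdev_raid_get_bdevs all` and comparing individual fields. A self-contained sketch of that check, run against a trimmed-down sample of the JSON (field values copied from the trace; `jq` assumed available):

```shell
# Trimmed sample of the bdev_raid_get_bdevs output captured above; the first
# jq filter mirrors the one in verify_raid_bdev_state (bdev_raid.sh@113).
sample='[{"name":"raid_bdev1","state":"configuring","raid_level":"raid5f",
"strip_size_kb":64,"num_base_bdevs":4,"num_base_bdevs_discovered":1,
"num_base_bdevs_operational":4}]'
raid_bdev_info=$(echo "$sample" | jq -r '.[] | select(.name == "raid_bdev1")')
state=$(echo "$raid_bdev_info" | jq -r '.state')
raid_level=$(echo "$raid_bdev_info" | jq -r '.raid_level')
strip_size=$(echo "$raid_bdev_info" | jq -r '.strip_size_kb')
echo "$state $raid_level $strip_size"
```

With the superblock just written to pt1 and the other three base bdevs not yet re-registered, the expected state at this point is `configuring` with one base bdev discovered.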
00:19:39.688 06:29:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:39.688 06:29:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.257 06:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:19:40.257 06:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:40.257 06:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.257 06:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.257 [2024-11-26 06:29:24.092145] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:40.257 [2024-11-26 06:29:24.092236] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:40.257 [2024-11-26 06:29:24.092260] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:19:40.257 [2024-11-26 06:29:24.092290] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:40.257 [2024-11-26 06:29:24.092878] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:40.257 [2024-11-26 06:29:24.092903] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:40.257 [2024-11-26 06:29:24.093003] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:40.257 [2024-11-26 06:29:24.093034] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:40.257 pt2 00:19:40.257 06:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.257 06:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:19:40.257 06:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:19:40.257 06:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.257 [2024-11-26 06:29:24.104131] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:19:40.257 06:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.257 06:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:19:40.257 06:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:40.257 06:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:40.257 06:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:40.257 06:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:40.257 06:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:40.257 06:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:40.257 06:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:40.257 06:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:40.257 06:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:40.257 06:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:40.257 06:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:40.257 06:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.257 06:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.257 06:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:19:40.257 06:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:40.257 "name": "raid_bdev1", 00:19:40.257 "uuid": "c424764b-d3c4-4129-b2ab-b219fbbad482", 00:19:40.257 "strip_size_kb": 64, 00:19:40.257 "state": "configuring", 00:19:40.257 "raid_level": "raid5f", 00:19:40.257 "superblock": true, 00:19:40.257 "num_base_bdevs": 4, 00:19:40.257 "num_base_bdevs_discovered": 1, 00:19:40.257 "num_base_bdevs_operational": 4, 00:19:40.257 "base_bdevs_list": [ 00:19:40.257 { 00:19:40.257 "name": "pt1", 00:19:40.257 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:40.257 "is_configured": true, 00:19:40.257 "data_offset": 2048, 00:19:40.257 "data_size": 63488 00:19:40.257 }, 00:19:40.257 { 00:19:40.257 "name": null, 00:19:40.257 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:40.257 "is_configured": false, 00:19:40.257 "data_offset": 0, 00:19:40.257 "data_size": 63488 00:19:40.257 }, 00:19:40.257 { 00:19:40.257 "name": null, 00:19:40.257 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:40.257 "is_configured": false, 00:19:40.257 "data_offset": 2048, 00:19:40.257 "data_size": 63488 00:19:40.257 }, 00:19:40.257 { 00:19:40.257 "name": null, 00:19:40.257 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:40.257 "is_configured": false, 00:19:40.257 "data_offset": 2048, 00:19:40.257 "data_size": 63488 00:19:40.257 } 00:19:40.257 ] 00:19:40.257 }' 00:19:40.257 06:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:40.257 06:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.518 06:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:19:40.518 06:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:40.518 06:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:19:40.518 06:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.518 06:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.518 [2024-11-26 06:29:24.507402] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:40.518 [2024-11-26 06:29:24.507553] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:40.518 [2024-11-26 06:29:24.507595] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:19:40.518 [2024-11-26 06:29:24.507639] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:40.518 [2024-11-26 06:29:24.508262] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:40.518 [2024-11-26 06:29:24.508326] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:40.518 [2024-11-26 06:29:24.508506] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:40.518 [2024-11-26 06:29:24.508591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:40.518 pt2 00:19:40.518 06:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.518 06:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:40.518 06:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:40.518 06:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:40.518 06:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.518 06:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.518 [2024-11-26 06:29:24.519336] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:19:40.518 [2024-11-26 06:29:24.519438] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:40.518 [2024-11-26 06:29:24.519473] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:19:40.518 [2024-11-26 06:29:24.519500] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:40.518 [2024-11-26 06:29:24.519951] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:40.518 [2024-11-26 06:29:24.520005] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:40.518 [2024-11-26 06:29:24.520122] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:19:40.518 [2024-11-26 06:29:24.520171] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:40.518 pt3 00:19:40.518 06:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.518 06:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:40.518 06:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:40.518 06:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:40.518 06:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.518 06:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.518 [2024-11-26 06:29:24.531278] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:40.518 [2024-11-26 06:29:24.531325] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:40.518 [2024-11-26 06:29:24.531344] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:19:40.518 [2024-11-26 06:29:24.531352] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:40.518 [2024-11-26 06:29:24.531716] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:40.518 [2024-11-26 06:29:24.531731] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:40.518 [2024-11-26 06:29:24.531789] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:19:40.518 [2024-11-26 06:29:24.531805] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:40.518 [2024-11-26 06:29:24.531949] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:40.518 [2024-11-26 06:29:24.531958] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:40.519 [2024-11-26 06:29:24.532221] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:40.519 [2024-11-26 06:29:24.539312] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:40.519 [2024-11-26 06:29:24.539336] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:40.519 [2024-11-26 06:29:24.539520] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:40.519 pt4 00:19:40.519 06:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.519 06:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:40.519 06:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:40.519 06:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:19:40.519 06:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:40.519 06:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:19:40.519 06:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:40.519 06:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:40.519 06:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:40.519 06:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:40.519 06:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:40.519 06:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:40.519 06:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:40.519 06:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:40.519 06:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:40.519 06:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.519 06:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.519 06:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.519 06:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:40.519 "name": "raid_bdev1", 00:19:40.519 "uuid": "c424764b-d3c4-4129-b2ab-b219fbbad482", 00:19:40.519 "strip_size_kb": 64, 00:19:40.519 "state": "online", 00:19:40.519 "raid_level": "raid5f", 00:19:40.519 "superblock": true, 00:19:40.519 "num_base_bdevs": 4, 00:19:40.519 "num_base_bdevs_discovered": 4, 00:19:40.519 "num_base_bdevs_operational": 4, 00:19:40.519 "base_bdevs_list": [ 00:19:40.519 { 00:19:40.519 "name": "pt1", 00:19:40.519 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:40.519 "is_configured": true, 00:19:40.519 
"data_offset": 2048, 00:19:40.519 "data_size": 63488 00:19:40.519 }, 00:19:40.519 { 00:19:40.519 "name": "pt2", 00:19:40.519 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:40.519 "is_configured": true, 00:19:40.519 "data_offset": 2048, 00:19:40.519 "data_size": 63488 00:19:40.519 }, 00:19:40.519 { 00:19:40.519 "name": "pt3", 00:19:40.519 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:40.519 "is_configured": true, 00:19:40.519 "data_offset": 2048, 00:19:40.519 "data_size": 63488 00:19:40.519 }, 00:19:40.519 { 00:19:40.519 "name": "pt4", 00:19:40.519 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:40.519 "is_configured": true, 00:19:40.519 "data_offset": 2048, 00:19:40.519 "data_size": 63488 00:19:40.519 } 00:19:40.519 ] 00:19:40.519 }' 00:19:40.519 06:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:40.519 06:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.089 06:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:19:41.089 06:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:41.089 06:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:41.089 06:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:41.089 06:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:41.089 06:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:41.089 06:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:41.089 06:29:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:41.089 06:29:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.089 06:29:24 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.089 [2024-11-26 06:29:24.993109] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:41.089 06:29:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.089 06:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:41.089 "name": "raid_bdev1", 00:19:41.090 "aliases": [ 00:19:41.090 "c424764b-d3c4-4129-b2ab-b219fbbad482" 00:19:41.090 ], 00:19:41.090 "product_name": "Raid Volume", 00:19:41.090 "block_size": 512, 00:19:41.090 "num_blocks": 190464, 00:19:41.090 "uuid": "c424764b-d3c4-4129-b2ab-b219fbbad482", 00:19:41.090 "assigned_rate_limits": { 00:19:41.090 "rw_ios_per_sec": 0, 00:19:41.090 "rw_mbytes_per_sec": 0, 00:19:41.090 "r_mbytes_per_sec": 0, 00:19:41.090 "w_mbytes_per_sec": 0 00:19:41.090 }, 00:19:41.090 "claimed": false, 00:19:41.090 "zoned": false, 00:19:41.090 "supported_io_types": { 00:19:41.090 "read": true, 00:19:41.090 "write": true, 00:19:41.090 "unmap": false, 00:19:41.090 "flush": false, 00:19:41.090 "reset": true, 00:19:41.090 "nvme_admin": false, 00:19:41.090 "nvme_io": false, 00:19:41.090 "nvme_io_md": false, 00:19:41.090 "write_zeroes": true, 00:19:41.090 "zcopy": false, 00:19:41.090 "get_zone_info": false, 00:19:41.090 "zone_management": false, 00:19:41.090 "zone_append": false, 00:19:41.090 "compare": false, 00:19:41.090 "compare_and_write": false, 00:19:41.090 "abort": false, 00:19:41.090 "seek_hole": false, 00:19:41.090 "seek_data": false, 00:19:41.090 "copy": false, 00:19:41.090 "nvme_iov_md": false 00:19:41.090 }, 00:19:41.090 "driver_specific": { 00:19:41.090 "raid": { 00:19:41.090 "uuid": "c424764b-d3c4-4129-b2ab-b219fbbad482", 00:19:41.090 "strip_size_kb": 64, 00:19:41.090 "state": "online", 00:19:41.090 "raid_level": "raid5f", 00:19:41.090 "superblock": true, 00:19:41.090 "num_base_bdevs": 4, 00:19:41.090 "num_base_bdevs_discovered": 4, 
00:19:41.090 "num_base_bdevs_operational": 4, 00:19:41.090 "base_bdevs_list": [ 00:19:41.090 { 00:19:41.090 "name": "pt1", 00:19:41.090 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:41.090 "is_configured": true, 00:19:41.090 "data_offset": 2048, 00:19:41.090 "data_size": 63488 00:19:41.090 }, 00:19:41.090 { 00:19:41.090 "name": "pt2", 00:19:41.090 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:41.090 "is_configured": true, 00:19:41.090 "data_offset": 2048, 00:19:41.090 "data_size": 63488 00:19:41.090 }, 00:19:41.090 { 00:19:41.090 "name": "pt3", 00:19:41.090 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:41.090 "is_configured": true, 00:19:41.090 "data_offset": 2048, 00:19:41.090 "data_size": 63488 00:19:41.090 }, 00:19:41.090 { 00:19:41.090 "name": "pt4", 00:19:41.090 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:41.090 "is_configured": true, 00:19:41.090 "data_offset": 2048, 00:19:41.090 "data_size": 63488 00:19:41.090 } 00:19:41.090 ] 00:19:41.090 } 00:19:41.090 } 00:19:41.090 }' 00:19:41.090 06:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:41.090 06:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:41.090 pt2 00:19:41.090 pt3 00:19:41.090 pt4' 00:19:41.090 06:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:41.090 06:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:41.090 06:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:41.090 06:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:41.090 06:29:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.090 06:29:25 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.090 06:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:41.090 06:29:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.090 06:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:41.090 06:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:41.090 06:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:41.090 06:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:41.090 06:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:41.090 06:29:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.090 06:29:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.090 06:29:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.350 06:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:41.350 06:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:41.350 06:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:41.350 06:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:41.350 06:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:19:41.350 06:29:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.350 
06:29:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.350 06:29:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.350 06:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:41.350 06:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:41.350 06:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:41.350 06:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:19:41.350 06:29:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.350 06:29:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.350 06:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:41.350 06:29:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.350 06:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:41.350 06:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:41.351 06:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:41.351 06:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:19:41.351 06:29:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.351 06:29:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.351 [2024-11-26 06:29:25.340442] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:41.351 06:29:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:19:41.351 06:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' c424764b-d3c4-4129-b2ab-b219fbbad482 '!=' c424764b-d3c4-4129-b2ab-b219fbbad482 ']' 00:19:41.351 06:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:19:41.351 06:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:41.351 06:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:19:41.351 06:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:19:41.351 06:29:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.351 06:29:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.351 [2024-11-26 06:29:25.388219] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:19:41.351 06:29:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.351 06:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:41.351 06:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:41.351 06:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:41.351 06:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:41.351 06:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:41.351 06:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:41.351 06:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:41.351 06:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:41.351 06:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:19:41.351 06:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:41.351 06:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:41.351 06:29:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.351 06:29:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.351 06:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:41.351 06:29:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.351 06:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:41.351 "name": "raid_bdev1", 00:19:41.351 "uuid": "c424764b-d3c4-4129-b2ab-b219fbbad482", 00:19:41.351 "strip_size_kb": 64, 00:19:41.351 "state": "online", 00:19:41.351 "raid_level": "raid5f", 00:19:41.351 "superblock": true, 00:19:41.351 "num_base_bdevs": 4, 00:19:41.351 "num_base_bdevs_discovered": 3, 00:19:41.351 "num_base_bdevs_operational": 3, 00:19:41.351 "base_bdevs_list": [ 00:19:41.351 { 00:19:41.351 "name": null, 00:19:41.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:41.351 "is_configured": false, 00:19:41.351 "data_offset": 0, 00:19:41.351 "data_size": 63488 00:19:41.351 }, 00:19:41.351 { 00:19:41.351 "name": "pt2", 00:19:41.351 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:41.351 "is_configured": true, 00:19:41.351 "data_offset": 2048, 00:19:41.351 "data_size": 63488 00:19:41.351 }, 00:19:41.351 { 00:19:41.351 "name": "pt3", 00:19:41.351 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:41.351 "is_configured": true, 00:19:41.351 "data_offset": 2048, 00:19:41.351 "data_size": 63488 00:19:41.351 }, 00:19:41.351 { 00:19:41.351 "name": "pt4", 00:19:41.351 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:41.351 "is_configured": true, 00:19:41.351 
"data_offset": 2048, 00:19:41.351 "data_size": 63488 00:19:41.351 } 00:19:41.351 ] 00:19:41.351 }' 00:19:41.351 06:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:41.351 06:29:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.922 06:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:41.922 06:29:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.922 06:29:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.922 [2024-11-26 06:29:25.851385] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:41.922 [2024-11-26 06:29:25.851476] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:41.922 [2024-11-26 06:29:25.851598] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:41.922 [2024-11-26 06:29:25.851726] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:41.922 [2024-11-26 06:29:25.851773] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:41.922 06:29:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.922 06:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:19:41.922 06:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:41.922 06:29:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.922 06:29:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.922 06:29:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.922 06:29:25 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:19:41.922 06:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:19:41.922 06:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:19:41.922 06:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:41.922 06:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:19:41.922 06:29:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.922 06:29:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.922 06:29:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.922 06:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:41.922 06:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:41.922 06:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:19:41.922 06:29:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.922 06:29:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.922 06:29:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.922 06:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:41.922 06:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:41.922 06:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:19:41.922 06:29:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.922 06:29:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.922 06:29:25 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.922 06:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:41.922 06:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:41.922 06:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:19:41.922 06:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:41.922 06:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:41.922 06:29:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.922 06:29:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.922 [2024-11-26 06:29:25.951204] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:41.922 [2024-11-26 06:29:25.951264] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:41.922 [2024-11-26 06:29:25.951301] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:19:41.922 [2024-11-26 06:29:25.951312] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:41.922 [2024-11-26 06:29:25.953965] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:41.922 pt2 00:19:41.922 [2024-11-26 06:29:25.954048] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:41.922 [2024-11-26 06:29:25.954170] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:41.922 [2024-11-26 06:29:25.954221] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:41.922 06:29:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.922 06:29:25 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:19:41.922 06:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:41.922 06:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:41.922 06:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:41.922 06:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:41.922 06:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:41.922 06:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:41.922 06:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:41.922 06:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:41.922 06:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:41.922 06:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:41.922 06:29:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:41.922 06:29:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.922 06:29:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.922 06:29:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.922 06:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:41.922 "name": "raid_bdev1", 00:19:41.922 "uuid": "c424764b-d3c4-4129-b2ab-b219fbbad482", 00:19:41.922 "strip_size_kb": 64, 00:19:41.922 "state": "configuring", 00:19:41.922 "raid_level": "raid5f", 00:19:41.922 "superblock": true, 00:19:41.922 
"num_base_bdevs": 4, 00:19:41.922 "num_base_bdevs_discovered": 1, 00:19:41.922 "num_base_bdevs_operational": 3, 00:19:41.922 "base_bdevs_list": [ 00:19:41.922 { 00:19:41.922 "name": null, 00:19:41.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:41.922 "is_configured": false, 00:19:41.922 "data_offset": 2048, 00:19:41.922 "data_size": 63488 00:19:41.922 }, 00:19:41.923 { 00:19:41.923 "name": "pt2", 00:19:41.923 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:41.923 "is_configured": true, 00:19:41.923 "data_offset": 2048, 00:19:41.923 "data_size": 63488 00:19:41.923 }, 00:19:41.923 { 00:19:41.923 "name": null, 00:19:41.923 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:41.923 "is_configured": false, 00:19:41.923 "data_offset": 2048, 00:19:41.923 "data_size": 63488 00:19:41.923 }, 00:19:41.923 { 00:19:41.923 "name": null, 00:19:41.923 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:41.923 "is_configured": false, 00:19:41.923 "data_offset": 2048, 00:19:41.923 "data_size": 63488 00:19:41.923 } 00:19:41.923 ] 00:19:41.923 }' 00:19:41.923 06:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:41.923 06:29:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:42.491 06:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:19:42.491 06:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:42.491 06:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:42.491 06:29:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.491 06:29:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:42.491 [2024-11-26 06:29:26.434438] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:42.491 [2024-11-26 
06:29:26.434574] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:42.491 [2024-11-26 06:29:26.434619] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:19:42.491 [2024-11-26 06:29:26.434651] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:42.491 [2024-11-26 06:29:26.435255] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:42.491 [2024-11-26 06:29:26.435428] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:42.491 [2024-11-26 06:29:26.435636] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:19:42.491 [2024-11-26 06:29:26.435710] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:42.491 pt3 00:19:42.491 06:29:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.491 06:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:19:42.491 06:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:42.491 06:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:42.491 06:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:42.491 06:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:42.492 06:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:42.492 06:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:42.492 06:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:42.492 06:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:19:42.492 06:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:42.492 06:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:42.492 06:29:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.492 06:29:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:42.492 06:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:42.492 06:29:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.492 06:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:42.492 "name": "raid_bdev1", 00:19:42.492 "uuid": "c424764b-d3c4-4129-b2ab-b219fbbad482", 00:19:42.492 "strip_size_kb": 64, 00:19:42.492 "state": "configuring", 00:19:42.492 "raid_level": "raid5f", 00:19:42.492 "superblock": true, 00:19:42.492 "num_base_bdevs": 4, 00:19:42.492 "num_base_bdevs_discovered": 2, 00:19:42.492 "num_base_bdevs_operational": 3, 00:19:42.492 "base_bdevs_list": [ 00:19:42.492 { 00:19:42.492 "name": null, 00:19:42.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:42.492 "is_configured": false, 00:19:42.492 "data_offset": 2048, 00:19:42.492 "data_size": 63488 00:19:42.492 }, 00:19:42.492 { 00:19:42.492 "name": "pt2", 00:19:42.492 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:42.492 "is_configured": true, 00:19:42.492 "data_offset": 2048, 00:19:42.492 "data_size": 63488 00:19:42.492 }, 00:19:42.492 { 00:19:42.492 "name": "pt3", 00:19:42.492 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:42.492 "is_configured": true, 00:19:42.492 "data_offset": 2048, 00:19:42.492 "data_size": 63488 00:19:42.492 }, 00:19:42.492 { 00:19:42.492 "name": null, 00:19:42.492 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:42.492 "is_configured": false, 00:19:42.492 "data_offset": 2048, 
00:19:42.492 "data_size": 63488 00:19:42.492 } 00:19:42.492 ] 00:19:42.492 }' 00:19:42.492 06:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:42.492 06:29:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:42.751 06:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:19:42.751 06:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:42.751 06:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:19:42.751 06:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:42.751 06:29:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.751 06:29:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.010 [2024-11-26 06:29:26.889697] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:43.010 [2024-11-26 06:29:26.889779] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:43.010 [2024-11-26 06:29:26.889805] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:19:43.010 [2024-11-26 06:29:26.889816] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:43.010 [2024-11-26 06:29:26.890461] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:43.010 [2024-11-26 06:29:26.890539] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:43.010 [2024-11-26 06:29:26.890659] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:19:43.010 [2024-11-26 06:29:26.890691] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:43.010 [2024-11-26 06:29:26.890863] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:43.010 [2024-11-26 06:29:26.890873] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:43.010 [2024-11-26 06:29:26.891194] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:19:43.010 [2024-11-26 06:29:26.898674] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:43.010 [2024-11-26 06:29:26.898704] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:19:43.010 [2024-11-26 06:29:26.899050] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:43.010 pt4 00:19:43.011 06:29:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.011 06:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:43.011 06:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:43.011 06:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:43.011 06:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:43.011 06:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:43.011 06:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:43.011 06:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:43.011 06:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:43.011 06:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:43.011 06:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:43.011 
06:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.011 06:29:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.011 06:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:43.011 06:29:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.011 06:29:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.011 06:29:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:43.011 "name": "raid_bdev1", 00:19:43.011 "uuid": "c424764b-d3c4-4129-b2ab-b219fbbad482", 00:19:43.011 "strip_size_kb": 64, 00:19:43.011 "state": "online", 00:19:43.011 "raid_level": "raid5f", 00:19:43.011 "superblock": true, 00:19:43.011 "num_base_bdevs": 4, 00:19:43.011 "num_base_bdevs_discovered": 3, 00:19:43.011 "num_base_bdevs_operational": 3, 00:19:43.011 "base_bdevs_list": [ 00:19:43.011 { 00:19:43.011 "name": null, 00:19:43.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:43.011 "is_configured": false, 00:19:43.011 "data_offset": 2048, 00:19:43.011 "data_size": 63488 00:19:43.011 }, 00:19:43.011 { 00:19:43.011 "name": "pt2", 00:19:43.011 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:43.011 "is_configured": true, 00:19:43.011 "data_offset": 2048, 00:19:43.011 "data_size": 63488 00:19:43.011 }, 00:19:43.011 { 00:19:43.011 "name": "pt3", 00:19:43.011 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:43.011 "is_configured": true, 00:19:43.011 "data_offset": 2048, 00:19:43.011 "data_size": 63488 00:19:43.011 }, 00:19:43.011 { 00:19:43.011 "name": "pt4", 00:19:43.011 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:43.011 "is_configured": true, 00:19:43.011 "data_offset": 2048, 00:19:43.011 "data_size": 63488 00:19:43.011 } 00:19:43.011 ] 00:19:43.011 }' 00:19:43.011 06:29:26 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:43.011 06:29:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.271 06:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:43.271 06:29:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.271 06:29:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.271 [2024-11-26 06:29:27.341224] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:43.271 [2024-11-26 06:29:27.341258] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:43.271 [2024-11-26 06:29:27.341360] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:43.271 [2024-11-26 06:29:27.341452] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:43.271 [2024-11-26 06:29:27.341467] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:19:43.271 06:29:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.271 06:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.271 06:29:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.271 06:29:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.271 06:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:19:43.271 06:29:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.271 06:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:19:43.271 06:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:19:43.271 06:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:19:43.271 06:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:19:43.271 06:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:19:43.271 06:29:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.271 06:29:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.531 06:29:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.531 06:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:43.531 06:29:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.531 06:29:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.531 [2024-11-26 06:29:27.417175] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:43.531 [2024-11-26 06:29:27.417301] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:43.531 [2024-11-26 06:29:27.417377] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:19:43.531 [2024-11-26 06:29:27.417430] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:43.531 [2024-11-26 06:29:27.420368] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:43.531 [2024-11-26 06:29:27.420458] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:43.531 [2024-11-26 06:29:27.420600] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:43.531 [2024-11-26 06:29:27.420714] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:43.531 
[2024-11-26 06:29:27.420948] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:19:43.531 [2024-11-26 06:29:27.421013] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:43.531 [2024-11-26 06:29:27.421084] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:19:43.531 [2024-11-26 06:29:27.421222] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:43.531 [2024-11-26 06:29:27.421412] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:43.531 pt1 00:19:43.531 06:29:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.531 06:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:19:43.531 06:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:19:43.531 06:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:43.531 06:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:43.531 06:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:43.531 06:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:43.531 06:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:43.531 06:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:43.531 06:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:43.531 06:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:43.531 06:29:27 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:19:43.531 06:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:43.531 06:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.531 06:29:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.531 06:29:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.532 06:29:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.532 06:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:43.532 "name": "raid_bdev1", 00:19:43.532 "uuid": "c424764b-d3c4-4129-b2ab-b219fbbad482", 00:19:43.532 "strip_size_kb": 64, 00:19:43.532 "state": "configuring", 00:19:43.532 "raid_level": "raid5f", 00:19:43.532 "superblock": true, 00:19:43.532 "num_base_bdevs": 4, 00:19:43.532 "num_base_bdevs_discovered": 2, 00:19:43.532 "num_base_bdevs_operational": 3, 00:19:43.532 "base_bdevs_list": [ 00:19:43.532 { 00:19:43.532 "name": null, 00:19:43.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:43.532 "is_configured": false, 00:19:43.532 "data_offset": 2048, 00:19:43.532 "data_size": 63488 00:19:43.532 }, 00:19:43.532 { 00:19:43.532 "name": "pt2", 00:19:43.532 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:43.532 "is_configured": true, 00:19:43.532 "data_offset": 2048, 00:19:43.532 "data_size": 63488 00:19:43.532 }, 00:19:43.532 { 00:19:43.532 "name": "pt3", 00:19:43.532 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:43.532 "is_configured": true, 00:19:43.532 "data_offset": 2048, 00:19:43.532 "data_size": 63488 00:19:43.532 }, 00:19:43.532 { 00:19:43.532 "name": null, 00:19:43.532 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:43.532 "is_configured": false, 00:19:43.532 "data_offset": 2048, 00:19:43.532 "data_size": 63488 00:19:43.532 } 00:19:43.532 ] 
00:19:43.532 }' 00:19:43.532 06:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:43.532 06:29:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.792 06:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:43.792 06:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:19:43.792 06:29:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.792 06:29:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.792 06:29:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.792 06:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:19:43.792 06:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:43.792 06:29:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.792 06:29:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.792 [2024-11-26 06:29:27.868776] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:43.792 [2024-11-26 06:29:27.868900] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:43.792 [2024-11-26 06:29:27.868970] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:19:43.792 [2024-11-26 06:29:27.869044] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:43.792 [2024-11-26 06:29:27.869735] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:43.792 [2024-11-26 06:29:27.869803] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:19:43.792 [2024-11-26 06:29:27.869967] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:19:43.792 [2024-11-26 06:29:27.870043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:43.792 [2024-11-26 06:29:27.870322] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:19:43.792 [2024-11-26 06:29:27.870370] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:43.792 [2024-11-26 06:29:27.870745] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:19:43.792 [2024-11-26 06:29:27.879975] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:19:43.792 [2024-11-26 06:29:27.880045] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:19:43.792 [2024-11-26 06:29:27.880487] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:43.792 pt4 00:19:43.792 06:29:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.792 06:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:43.792 06:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:43.792 06:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:43.792 06:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:43.792 06:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:43.792 06:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:43.792 06:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:43.792 06:29:27 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:43.792 06:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:43.792 06:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:43.792 06:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.792 06:29:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.792 06:29:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.792 06:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:43.792 06:29:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.052 06:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:44.052 "name": "raid_bdev1", 00:19:44.052 "uuid": "c424764b-d3c4-4129-b2ab-b219fbbad482", 00:19:44.052 "strip_size_kb": 64, 00:19:44.052 "state": "online", 00:19:44.052 "raid_level": "raid5f", 00:19:44.052 "superblock": true, 00:19:44.052 "num_base_bdevs": 4, 00:19:44.052 "num_base_bdevs_discovered": 3, 00:19:44.052 "num_base_bdevs_operational": 3, 00:19:44.052 "base_bdevs_list": [ 00:19:44.052 { 00:19:44.052 "name": null, 00:19:44.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:44.052 "is_configured": false, 00:19:44.052 "data_offset": 2048, 00:19:44.052 "data_size": 63488 00:19:44.052 }, 00:19:44.052 { 00:19:44.052 "name": "pt2", 00:19:44.052 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:44.052 "is_configured": true, 00:19:44.052 "data_offset": 2048, 00:19:44.052 "data_size": 63488 00:19:44.052 }, 00:19:44.052 { 00:19:44.052 "name": "pt3", 00:19:44.052 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:44.052 "is_configured": true, 00:19:44.052 "data_offset": 2048, 00:19:44.052 "data_size": 63488 
00:19:44.052 }, 00:19:44.052 { 00:19:44.052 "name": "pt4", 00:19:44.052 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:44.052 "is_configured": true, 00:19:44.052 "data_offset": 2048, 00:19:44.052 "data_size": 63488 00:19:44.052 } 00:19:44.052 ] 00:19:44.052 }' 00:19:44.052 06:29:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:44.052 06:29:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:44.312 06:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:19:44.312 06:29:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.312 06:29:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:44.312 06:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:44.312 06:29:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.312 06:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:19:44.312 06:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:44.312 06:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:19:44.312 06:29:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.312 06:29:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:44.312 [2024-11-26 06:29:28.379732] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:44.312 06:29:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.312 06:29:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' c424764b-d3c4-4129-b2ab-b219fbbad482 '!=' c424764b-d3c4-4129-b2ab-b219fbbad482 ']' 00:19:44.312 06:29:28 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84687 00:19:44.312 06:29:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 84687 ']' 00:19:44.312 06:29:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 84687 00:19:44.312 06:29:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:19:44.312 06:29:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:44.312 06:29:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84687 00:19:44.572 06:29:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:44.572 killing process with pid 84687 00:19:44.572 06:29:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:44.572 06:29:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84687' 00:19:44.572 06:29:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 84687 00:19:44.572 [2024-11-26 06:29:28.464458] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:44.572 [2024-11-26 06:29:28.464596] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:44.572 06:29:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 84687 00:19:44.572 [2024-11-26 06:29:28.464692] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:44.572 [2024-11-26 06:29:28.464708] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:19:44.832 [2024-11-26 06:29:28.918642] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:46.211 ************************************ 00:19:46.211 END TEST raid5f_superblock_test 00:19:46.211 
************************************ 00:19:46.211 06:29:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:19:46.211 00:19:46.211 real 0m8.761s 00:19:46.211 user 0m13.489s 00:19:46.211 sys 0m1.717s 00:19:46.211 06:29:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:46.211 06:29:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:46.211 06:29:30 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:19:46.211 06:29:30 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:19:46.211 06:29:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:46.211 06:29:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:46.211 06:29:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:46.211 ************************************ 00:19:46.211 START TEST raid5f_rebuild_test 00:19:46.211 ************************************ 00:19:46.211 06:29:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:19:46.211 06:29:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:19:46.211 06:29:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:19:46.211 06:29:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:19:46.211 06:29:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:46.211 06:29:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:46.211 06:29:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:46.211 06:29:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:46.211 06:29:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:19:46.211 06:29:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:46.211 06:29:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:46.211 06:29:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:46.211 06:29:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:46.211 06:29:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:46.211 06:29:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:19:46.211 06:29:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:46.211 06:29:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:46.211 06:29:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:19:46.211 06:29:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:46.211 06:29:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:46.211 06:29:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:46.211 06:29:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:46.211 06:29:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:46.211 06:29:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:46.211 06:29:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:46.211 06:29:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:46.211 06:29:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:46.211 06:29:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:19:46.211 06:29:30 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:19:46.211 06:29:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:19:46.211 06:29:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:19:46.211 06:29:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:19:46.211 06:29:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=85173 00:19:46.211 06:29:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:46.211 06:29:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 85173 00:19:46.211 06:29:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 85173 ']' 00:19:46.211 06:29:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:46.211 06:29:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:46.211 06:29:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:46.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:46.211 06:29:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:46.211 06:29:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:46.471 [2024-11-26 06:29:30.355165] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:19:46.471 [2024-11-26 06:29:30.355433] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85173 ] 00:19:46.471 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:46.471 Zero copy mechanism will not be used. 00:19:46.471 [2024-11-26 06:29:30.535768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:46.730 [2024-11-26 06:29:30.685568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:46.989 [2024-11-26 06:29:30.942538] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:46.989 [2024-11-26 06:29:30.942743] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:47.250 06:29:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:47.250 06:29:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:19:47.250 06:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:47.250 06:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:47.250 06:29:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.250 06:29:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.250 BaseBdev1_malloc 00:19:47.250 06:29:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.250 06:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:47.250 06:29:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.250 06:29:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 
-- # set +x 00:19:47.250 [2024-11-26 06:29:31.280915] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:47.250 [2024-11-26 06:29:31.281079] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:47.250 [2024-11-26 06:29:31.281137] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:47.250 [2024-11-26 06:29:31.281204] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:47.250 [2024-11-26 06:29:31.283882] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:47.250 [2024-11-26 06:29:31.283923] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:47.250 BaseBdev1 00:19:47.250 06:29:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.250 06:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:47.250 06:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:47.250 06:29:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.250 06:29:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.250 BaseBdev2_malloc 00:19:47.250 06:29:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.250 06:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:47.250 06:29:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.250 06:29:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.250 [2024-11-26 06:29:31.345841] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:47.250 [2024-11-26 06:29:31.345922] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:47.250 [2024-11-26 06:29:31.345944] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:47.250 [2024-11-26 06:29:31.345957] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:47.250 [2024-11-26 06:29:31.348583] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:47.250 [2024-11-26 06:29:31.348624] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:47.250 BaseBdev2 00:19:47.250 06:29:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.250 06:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:47.250 06:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:47.250 06:29:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.250 06:29:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.510 BaseBdev3_malloc 00:19:47.510 06:29:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.510 06:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:19:47.510 06:29:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.510 06:29:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.510 [2024-11-26 06:29:31.425326] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:19:47.511 [2024-11-26 06:29:31.425484] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:47.511 [2024-11-26 06:29:31.425585] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:47.511 
[2024-11-26 06:29:31.425633] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:47.511 [2024-11-26 06:29:31.428411] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:47.511 [2024-11-26 06:29:31.428527] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:47.511 BaseBdev3 00:19:47.511 06:29:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.511 06:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:47.511 06:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:19:47.511 06:29:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.511 06:29:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.511 BaseBdev4_malloc 00:19:47.511 06:29:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.511 06:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:19:47.511 06:29:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.511 06:29:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.511 [2024-11-26 06:29:31.488952] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:19:47.511 [2024-11-26 06:29:31.489087] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:47.511 [2024-11-26 06:29:31.489134] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:19:47.511 [2024-11-26 06:29:31.489174] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:47.511 [2024-11-26 06:29:31.491867] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:19:47.511 [2024-11-26 06:29:31.491945] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:19:47.511 BaseBdev4 00:19:47.511 06:29:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.511 06:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:19:47.511 06:29:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.511 06:29:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.511 spare_malloc 00:19:47.511 06:29:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.511 06:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:47.511 06:29:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.511 06:29:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.511 spare_delay 00:19:47.511 06:29:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.511 06:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:47.511 06:29:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.511 06:29:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.511 [2024-11-26 06:29:31.565124] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:47.511 [2024-11-26 06:29:31.565259] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:47.511 [2024-11-26 06:29:31.565307] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:19:47.511 [2024-11-26 06:29:31.565368] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:47.511 [2024-11-26 06:29:31.568019] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:47.511 [2024-11-26 06:29:31.568116] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:47.511 spare 00:19:47.511 06:29:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.511 06:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:19:47.511 06:29:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.511 06:29:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.511 [2024-11-26 06:29:31.577221] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:47.511 [2024-11-26 06:29:31.579578] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:47.511 [2024-11-26 06:29:31.579708] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:47.511 [2024-11-26 06:29:31.579821] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:47.511 [2024-11-26 06:29:31.579992] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:47.511 [2024-11-26 06:29:31.580041] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:19:47.511 [2024-11-26 06:29:31.580452] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:47.511 [2024-11-26 06:29:31.589022] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:47.511 [2024-11-26 06:29:31.589109] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:47.511 [2024-11-26 
06:29:31.589489] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:47.511 06:29:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.511 06:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:19:47.511 06:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:47.511 06:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:47.511 06:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:47.511 06:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:47.511 06:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:47.511 06:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:47.511 06:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:47.511 06:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:47.511 06:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:47.511 06:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.511 06:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:47.511 06:29:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.511 06:29:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.511 06:29:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.771 06:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:47.771 "name": "raid_bdev1", 00:19:47.771 "uuid": 
"7d76711f-c8b9-4418-97a0-6243e96e22fd", 00:19:47.771 "strip_size_kb": 64, 00:19:47.771 "state": "online", 00:19:47.771 "raid_level": "raid5f", 00:19:47.771 "superblock": false, 00:19:47.771 "num_base_bdevs": 4, 00:19:47.771 "num_base_bdevs_discovered": 4, 00:19:47.771 "num_base_bdevs_operational": 4, 00:19:47.771 "base_bdevs_list": [ 00:19:47.771 { 00:19:47.771 "name": "BaseBdev1", 00:19:47.771 "uuid": "99568720-0480-510c-b194-63b2238a78ca", 00:19:47.771 "is_configured": true, 00:19:47.771 "data_offset": 0, 00:19:47.771 "data_size": 65536 00:19:47.771 }, 00:19:47.771 { 00:19:47.771 "name": "BaseBdev2", 00:19:47.771 "uuid": "f550eed2-6d8b-585a-bfa8-aeb940299527", 00:19:47.771 "is_configured": true, 00:19:47.771 "data_offset": 0, 00:19:47.771 "data_size": 65536 00:19:47.771 }, 00:19:47.771 { 00:19:47.771 "name": "BaseBdev3", 00:19:47.771 "uuid": "e68b83d9-0bbf-5d14-b506-0745b73a8648", 00:19:47.771 "is_configured": true, 00:19:47.771 "data_offset": 0, 00:19:47.771 "data_size": 65536 00:19:47.771 }, 00:19:47.771 { 00:19:47.771 "name": "BaseBdev4", 00:19:47.771 "uuid": "c4c7dff3-bbdd-597c-95bf-4654011bd77f", 00:19:47.771 "is_configured": true, 00:19:47.771 "data_offset": 0, 00:19:47.771 "data_size": 65536 00:19:47.771 } 00:19:47.771 ] 00:19:47.771 }' 00:19:47.771 06:29:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:47.771 06:29:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:48.031 06:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:48.031 06:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:48.031 06:29:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.031 06:29:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:48.031 [2024-11-26 06:29:32.051391] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:19:48.031 06:29:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.031 06:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:19:48.031 06:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:48.031 06:29:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.031 06:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:48.031 06:29:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:48.031 06:29:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.031 06:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:19:48.031 06:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:48.031 06:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:19:48.031 06:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:19:48.031 06:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:19:48.031 06:29:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:48.031 06:29:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:48.031 06:29:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:48.031 06:29:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:48.031 06:29:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:48.031 06:29:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:19:48.031 06:29:32 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:48.031 06:29:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:48.031 06:29:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:48.291 [2024-11-26 06:29:32.322663] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:19:48.291 /dev/nbd0 00:19:48.291 06:29:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:48.291 06:29:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:48.291 06:29:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:48.291 06:29:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:19:48.291 06:29:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:48.291 06:29:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:48.291 06:29:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:48.291 06:29:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:19:48.291 06:29:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:48.291 06:29:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:48.291 06:29:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:48.291 1+0 records in 00:19:48.291 1+0 records out 00:19:48.291 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000573647 s, 7.1 MB/s 00:19:48.291 06:29:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:48.291 06:29:32 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:19:48.291 06:29:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:48.291 06:29:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:48.291 06:29:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:19:48.291 06:29:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:48.291 06:29:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:48.291 06:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:19:48.291 06:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:19:48.291 06:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:19:48.291 06:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:19:48.858 512+0 records in 00:19:48.858 512+0 records out 00:19:48.858 100663296 bytes (101 MB, 96 MiB) copied, 0.541554 s, 186 MB/s 00:19:48.858 06:29:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:48.858 06:29:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:48.858 06:29:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:48.858 06:29:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:48.858 06:29:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:19:48.858 06:29:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:48.858 06:29:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_stop_disk /dev/nbd0 00:19:49.117 06:29:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:49.117 [2024-11-26 06:29:33.178636] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:49.117 06:29:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:49.117 06:29:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:49.117 06:29:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:49.117 06:29:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:49.117 06:29:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:49.117 06:29:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:19:49.118 06:29:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:19:49.118 06:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:49.118 06:29:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.118 06:29:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:49.118 [2024-11-26 06:29:33.194605] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:49.118 06:29:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.118 06:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:49.118 06:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:49.118 06:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:49.118 06:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:49.118 06:29:33 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:49.118 06:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:49.118 06:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:49.118 06:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:49.118 06:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:49.118 06:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:49.118 06:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:49.118 06:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:49.118 06:29:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.118 06:29:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:49.118 06:29:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.377 06:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:49.377 "name": "raid_bdev1", 00:19:49.377 "uuid": "7d76711f-c8b9-4418-97a0-6243e96e22fd", 00:19:49.377 "strip_size_kb": 64, 00:19:49.377 "state": "online", 00:19:49.377 "raid_level": "raid5f", 00:19:49.377 "superblock": false, 00:19:49.377 "num_base_bdevs": 4, 00:19:49.377 "num_base_bdevs_discovered": 3, 00:19:49.377 "num_base_bdevs_operational": 3, 00:19:49.377 "base_bdevs_list": [ 00:19:49.377 { 00:19:49.377 "name": null, 00:19:49.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:49.377 "is_configured": false, 00:19:49.377 "data_offset": 0, 00:19:49.377 "data_size": 65536 00:19:49.377 }, 00:19:49.377 { 00:19:49.377 "name": "BaseBdev2", 00:19:49.377 "uuid": "f550eed2-6d8b-585a-bfa8-aeb940299527", 00:19:49.377 "is_configured": true, 00:19:49.377 
"data_offset": 0, 00:19:49.377 "data_size": 65536 00:19:49.377 }, 00:19:49.377 { 00:19:49.377 "name": "BaseBdev3", 00:19:49.377 "uuid": "e68b83d9-0bbf-5d14-b506-0745b73a8648", 00:19:49.377 "is_configured": true, 00:19:49.377 "data_offset": 0, 00:19:49.377 "data_size": 65536 00:19:49.377 }, 00:19:49.377 { 00:19:49.377 "name": "BaseBdev4", 00:19:49.377 "uuid": "c4c7dff3-bbdd-597c-95bf-4654011bd77f", 00:19:49.377 "is_configured": true, 00:19:49.377 "data_offset": 0, 00:19:49.377 "data_size": 65536 00:19:49.377 } 00:19:49.377 ] 00:19:49.377 }' 00:19:49.377 06:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:49.377 06:29:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:49.637 06:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:49.637 06:29:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.637 06:29:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:49.637 [2024-11-26 06:29:33.597941] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:49.637 [2024-11-26 06:29:33.614094] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:19:49.637 06:29:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.637 06:29:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:49.637 [2024-11-26 06:29:33.623854] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:50.576 06:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:50.576 06:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:50.576 06:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:19:50.576 06:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:50.576 06:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:50.576 06:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:50.576 06:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.576 06:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:50.576 06:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:50.576 06:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.576 06:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:50.576 "name": "raid_bdev1", 00:19:50.576 "uuid": "7d76711f-c8b9-4418-97a0-6243e96e22fd", 00:19:50.576 "strip_size_kb": 64, 00:19:50.576 "state": "online", 00:19:50.576 "raid_level": "raid5f", 00:19:50.576 "superblock": false, 00:19:50.576 "num_base_bdevs": 4, 00:19:50.576 "num_base_bdevs_discovered": 4, 00:19:50.576 "num_base_bdevs_operational": 4, 00:19:50.576 "process": { 00:19:50.576 "type": "rebuild", 00:19:50.576 "target": "spare", 00:19:50.576 "progress": { 00:19:50.576 "blocks": 19200, 00:19:50.576 "percent": 9 00:19:50.576 } 00:19:50.576 }, 00:19:50.576 "base_bdevs_list": [ 00:19:50.576 { 00:19:50.576 "name": "spare", 00:19:50.576 "uuid": "bb841347-efff-52f4-bbdb-2abc03d9ee40", 00:19:50.576 "is_configured": true, 00:19:50.576 "data_offset": 0, 00:19:50.576 "data_size": 65536 00:19:50.576 }, 00:19:50.576 { 00:19:50.576 "name": "BaseBdev2", 00:19:50.576 "uuid": "f550eed2-6d8b-585a-bfa8-aeb940299527", 00:19:50.576 "is_configured": true, 00:19:50.576 "data_offset": 0, 00:19:50.576 "data_size": 65536 00:19:50.576 }, 00:19:50.576 { 00:19:50.576 "name": "BaseBdev3", 00:19:50.576 "uuid": 
"e68b83d9-0bbf-5d14-b506-0745b73a8648", 00:19:50.576 "is_configured": true, 00:19:50.576 "data_offset": 0, 00:19:50.576 "data_size": 65536 00:19:50.576 }, 00:19:50.576 { 00:19:50.576 "name": "BaseBdev4", 00:19:50.576 "uuid": "c4c7dff3-bbdd-597c-95bf-4654011bd77f", 00:19:50.576 "is_configured": true, 00:19:50.576 "data_offset": 0, 00:19:50.576 "data_size": 65536 00:19:50.576 } 00:19:50.576 ] 00:19:50.576 }' 00:19:50.576 06:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:50.835 06:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:50.835 06:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:50.835 06:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:50.835 06:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:50.835 06:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.835 06:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:50.835 [2024-11-26 06:29:34.774729] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:50.835 [2024-11-26 06:29:34.833474] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:50.835 [2024-11-26 06:29:34.833633] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:50.835 [2024-11-26 06:29:34.833712] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:50.835 [2024-11-26 06:29:34.833754] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:50.835 06:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.835 06:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # 
verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:50.835 06:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:50.835 06:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:50.835 06:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:50.835 06:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:50.835 06:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:50.835 06:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:50.835 06:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:50.835 06:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:50.835 06:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:50.835 06:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:50.835 06:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:50.835 06:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.835 06:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:50.835 06:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.835 06:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:50.835 "name": "raid_bdev1", 00:19:50.835 "uuid": "7d76711f-c8b9-4418-97a0-6243e96e22fd", 00:19:50.835 "strip_size_kb": 64, 00:19:50.835 "state": "online", 00:19:50.835 "raid_level": "raid5f", 00:19:50.835 "superblock": false, 00:19:50.835 "num_base_bdevs": 4, 00:19:50.835 "num_base_bdevs_discovered": 3, 00:19:50.835 
"num_base_bdevs_operational": 3, 00:19:50.835 "base_bdevs_list": [ 00:19:50.835 { 00:19:50.835 "name": null, 00:19:50.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:50.835 "is_configured": false, 00:19:50.835 "data_offset": 0, 00:19:50.835 "data_size": 65536 00:19:50.835 }, 00:19:50.835 { 00:19:50.835 "name": "BaseBdev2", 00:19:50.835 "uuid": "f550eed2-6d8b-585a-bfa8-aeb940299527", 00:19:50.835 "is_configured": true, 00:19:50.835 "data_offset": 0, 00:19:50.836 "data_size": 65536 00:19:50.836 }, 00:19:50.836 { 00:19:50.836 "name": "BaseBdev3", 00:19:50.836 "uuid": "e68b83d9-0bbf-5d14-b506-0745b73a8648", 00:19:50.836 "is_configured": true, 00:19:50.836 "data_offset": 0, 00:19:50.836 "data_size": 65536 00:19:50.836 }, 00:19:50.836 { 00:19:50.836 "name": "BaseBdev4", 00:19:50.836 "uuid": "c4c7dff3-bbdd-597c-95bf-4654011bd77f", 00:19:50.836 "is_configured": true, 00:19:50.836 "data_offset": 0, 00:19:50.836 "data_size": 65536 00:19:50.836 } 00:19:50.836 ] 00:19:50.836 }' 00:19:50.836 06:29:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:50.836 06:29:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:51.404 06:29:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:51.404 06:29:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:51.404 06:29:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:51.404 06:29:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:51.404 06:29:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:51.404 06:29:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:51.404 06:29:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:51.404 06:29:35 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.404 06:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:51.404 06:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.404 06:29:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:51.404 "name": "raid_bdev1", 00:19:51.404 "uuid": "7d76711f-c8b9-4418-97a0-6243e96e22fd", 00:19:51.404 "strip_size_kb": 64, 00:19:51.404 "state": "online", 00:19:51.404 "raid_level": "raid5f", 00:19:51.404 "superblock": false, 00:19:51.404 "num_base_bdevs": 4, 00:19:51.404 "num_base_bdevs_discovered": 3, 00:19:51.404 "num_base_bdevs_operational": 3, 00:19:51.404 "base_bdevs_list": [ 00:19:51.404 { 00:19:51.404 "name": null, 00:19:51.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:51.404 "is_configured": false, 00:19:51.404 "data_offset": 0, 00:19:51.404 "data_size": 65536 00:19:51.404 }, 00:19:51.404 { 00:19:51.404 "name": "BaseBdev2", 00:19:51.404 "uuid": "f550eed2-6d8b-585a-bfa8-aeb940299527", 00:19:51.404 "is_configured": true, 00:19:51.404 "data_offset": 0, 00:19:51.404 "data_size": 65536 00:19:51.404 }, 00:19:51.404 { 00:19:51.404 "name": "BaseBdev3", 00:19:51.404 "uuid": "e68b83d9-0bbf-5d14-b506-0745b73a8648", 00:19:51.404 "is_configured": true, 00:19:51.404 "data_offset": 0, 00:19:51.404 "data_size": 65536 00:19:51.404 }, 00:19:51.404 { 00:19:51.404 "name": "BaseBdev4", 00:19:51.404 "uuid": "c4c7dff3-bbdd-597c-95bf-4654011bd77f", 00:19:51.404 "is_configured": true, 00:19:51.404 "data_offset": 0, 00:19:51.404 "data_size": 65536 00:19:51.404 } 00:19:51.404 ] 00:19:51.404 }' 00:19:51.404 06:29:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:51.404 06:29:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:51.404 06:29:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:19:51.404 06:29:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:51.404 06:29:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:51.404 06:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.404 06:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:51.404 [2024-11-26 06:29:35.485302] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:51.404 [2024-11-26 06:29:35.501399] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:19:51.404 06:29:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.404 06:29:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:51.404 [2024-11-26 06:29:35.511285] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:52.787 06:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:52.787 06:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:52.787 06:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:52.787 06:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:52.787 06:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:52.787 06:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:52.787 06:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:52.787 06:29:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.787 06:29:36 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:52.787 06:29:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.787 06:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:52.787 "name": "raid_bdev1", 00:19:52.787 "uuid": "7d76711f-c8b9-4418-97a0-6243e96e22fd", 00:19:52.787 "strip_size_kb": 64, 00:19:52.787 "state": "online", 00:19:52.787 "raid_level": "raid5f", 00:19:52.787 "superblock": false, 00:19:52.787 "num_base_bdevs": 4, 00:19:52.787 "num_base_bdevs_discovered": 4, 00:19:52.787 "num_base_bdevs_operational": 4, 00:19:52.787 "process": { 00:19:52.787 "type": "rebuild", 00:19:52.787 "target": "spare", 00:19:52.787 "progress": { 00:19:52.787 "blocks": 19200, 00:19:52.787 "percent": 9 00:19:52.787 } 00:19:52.787 }, 00:19:52.787 "base_bdevs_list": [ 00:19:52.787 { 00:19:52.787 "name": "spare", 00:19:52.787 "uuid": "bb841347-efff-52f4-bbdb-2abc03d9ee40", 00:19:52.787 "is_configured": true, 00:19:52.788 "data_offset": 0, 00:19:52.788 "data_size": 65536 00:19:52.788 }, 00:19:52.788 { 00:19:52.788 "name": "BaseBdev2", 00:19:52.788 "uuid": "f550eed2-6d8b-585a-bfa8-aeb940299527", 00:19:52.788 "is_configured": true, 00:19:52.788 "data_offset": 0, 00:19:52.788 "data_size": 65536 00:19:52.788 }, 00:19:52.788 { 00:19:52.788 "name": "BaseBdev3", 00:19:52.788 "uuid": "e68b83d9-0bbf-5d14-b506-0745b73a8648", 00:19:52.788 "is_configured": true, 00:19:52.788 "data_offset": 0, 00:19:52.788 "data_size": 65536 00:19:52.788 }, 00:19:52.788 { 00:19:52.788 "name": "BaseBdev4", 00:19:52.788 "uuid": "c4c7dff3-bbdd-597c-95bf-4654011bd77f", 00:19:52.788 "is_configured": true, 00:19:52.788 "data_offset": 0, 00:19:52.788 "data_size": 65536 00:19:52.788 } 00:19:52.788 ] 00:19:52.788 }' 00:19:52.788 06:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:52.788 06:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:19:52.788 06:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:52.788 06:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:52.788 06:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:19:52.788 06:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:19:52.788 06:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:19:52.788 06:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=649 00:19:52.788 06:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:52.788 06:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:52.788 06:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:52.788 06:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:52.788 06:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:52.788 06:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:52.788 06:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:52.788 06:29:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.788 06:29:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:52.788 06:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:52.788 06:29:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.788 06:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:52.788 
"name": "raid_bdev1", 00:19:52.788 "uuid": "7d76711f-c8b9-4418-97a0-6243e96e22fd", 00:19:52.788 "strip_size_kb": 64, 00:19:52.788 "state": "online", 00:19:52.788 "raid_level": "raid5f", 00:19:52.788 "superblock": false, 00:19:52.788 "num_base_bdevs": 4, 00:19:52.788 "num_base_bdevs_discovered": 4, 00:19:52.788 "num_base_bdevs_operational": 4, 00:19:52.788 "process": { 00:19:52.788 "type": "rebuild", 00:19:52.788 "target": "spare", 00:19:52.788 "progress": { 00:19:52.788 "blocks": 21120, 00:19:52.788 "percent": 10 00:19:52.788 } 00:19:52.788 }, 00:19:52.788 "base_bdevs_list": [ 00:19:52.788 { 00:19:52.788 "name": "spare", 00:19:52.788 "uuid": "bb841347-efff-52f4-bbdb-2abc03d9ee40", 00:19:52.788 "is_configured": true, 00:19:52.788 "data_offset": 0, 00:19:52.788 "data_size": 65536 00:19:52.788 }, 00:19:52.788 { 00:19:52.788 "name": "BaseBdev2", 00:19:52.788 "uuid": "f550eed2-6d8b-585a-bfa8-aeb940299527", 00:19:52.788 "is_configured": true, 00:19:52.788 "data_offset": 0, 00:19:52.788 "data_size": 65536 00:19:52.788 }, 00:19:52.788 { 00:19:52.788 "name": "BaseBdev3", 00:19:52.788 "uuid": "e68b83d9-0bbf-5d14-b506-0745b73a8648", 00:19:52.788 "is_configured": true, 00:19:52.788 "data_offset": 0, 00:19:52.788 "data_size": 65536 00:19:52.788 }, 00:19:52.788 { 00:19:52.788 "name": "BaseBdev4", 00:19:52.788 "uuid": "c4c7dff3-bbdd-597c-95bf-4654011bd77f", 00:19:52.788 "is_configured": true, 00:19:52.788 "data_offset": 0, 00:19:52.788 "data_size": 65536 00:19:52.788 } 00:19:52.788 ] 00:19:52.788 }' 00:19:52.788 06:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:52.788 06:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:52.788 06:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:52.788 06:29:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:52.788 06:29:36 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:53.726 06:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:53.726 06:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:53.726 06:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:53.726 06:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:53.726 06:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:53.726 06:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:53.726 06:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:53.726 06:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:53.726 06:29:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.726 06:29:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:53.726 06:29:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.985 06:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:53.985 "name": "raid_bdev1", 00:19:53.985 "uuid": "7d76711f-c8b9-4418-97a0-6243e96e22fd", 00:19:53.985 "strip_size_kb": 64, 00:19:53.985 "state": "online", 00:19:53.985 "raid_level": "raid5f", 00:19:53.985 "superblock": false, 00:19:53.985 "num_base_bdevs": 4, 00:19:53.985 "num_base_bdevs_discovered": 4, 00:19:53.985 "num_base_bdevs_operational": 4, 00:19:53.985 "process": { 00:19:53.985 "type": "rebuild", 00:19:53.985 "target": "spare", 00:19:53.985 "progress": { 00:19:53.985 "blocks": 44160, 00:19:53.985 "percent": 22 00:19:53.985 } 00:19:53.985 }, 00:19:53.985 "base_bdevs_list": [ 00:19:53.985 { 
00:19:53.985 "name": "spare", 00:19:53.985 "uuid": "bb841347-efff-52f4-bbdb-2abc03d9ee40", 00:19:53.985 "is_configured": true, 00:19:53.985 "data_offset": 0, 00:19:53.985 "data_size": 65536 00:19:53.985 }, 00:19:53.985 { 00:19:53.985 "name": "BaseBdev2", 00:19:53.985 "uuid": "f550eed2-6d8b-585a-bfa8-aeb940299527", 00:19:53.985 "is_configured": true, 00:19:53.985 "data_offset": 0, 00:19:53.985 "data_size": 65536 00:19:53.985 }, 00:19:53.985 { 00:19:53.985 "name": "BaseBdev3", 00:19:53.985 "uuid": "e68b83d9-0bbf-5d14-b506-0745b73a8648", 00:19:53.985 "is_configured": true, 00:19:53.985 "data_offset": 0, 00:19:53.985 "data_size": 65536 00:19:53.985 }, 00:19:53.985 { 00:19:53.985 "name": "BaseBdev4", 00:19:53.985 "uuid": "c4c7dff3-bbdd-597c-95bf-4654011bd77f", 00:19:53.985 "is_configured": true, 00:19:53.985 "data_offset": 0, 00:19:53.985 "data_size": 65536 00:19:53.985 } 00:19:53.985 ] 00:19:53.985 }' 00:19:53.985 06:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:53.985 06:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:53.985 06:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:53.985 06:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:53.985 06:29:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:54.923 06:29:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:54.923 06:29:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:54.923 06:29:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:54.923 06:29:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:54.923 06:29:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:19:54.923 06:29:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:54.923 06:29:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:54.923 06:29:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:54.923 06:29:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.923 06:29:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:54.923 06:29:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.923 06:29:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:54.923 "name": "raid_bdev1", 00:19:54.923 "uuid": "7d76711f-c8b9-4418-97a0-6243e96e22fd", 00:19:54.923 "strip_size_kb": 64, 00:19:54.923 "state": "online", 00:19:54.923 "raid_level": "raid5f", 00:19:54.923 "superblock": false, 00:19:54.923 "num_base_bdevs": 4, 00:19:54.923 "num_base_bdevs_discovered": 4, 00:19:54.923 "num_base_bdevs_operational": 4, 00:19:54.923 "process": { 00:19:54.923 "type": "rebuild", 00:19:54.923 "target": "spare", 00:19:54.923 "progress": { 00:19:54.923 "blocks": 65280, 00:19:54.923 "percent": 33 00:19:54.923 } 00:19:54.923 }, 00:19:54.923 "base_bdevs_list": [ 00:19:54.923 { 00:19:54.923 "name": "spare", 00:19:54.923 "uuid": "bb841347-efff-52f4-bbdb-2abc03d9ee40", 00:19:54.923 "is_configured": true, 00:19:54.923 "data_offset": 0, 00:19:54.923 "data_size": 65536 00:19:54.923 }, 00:19:54.923 { 00:19:54.923 "name": "BaseBdev2", 00:19:54.923 "uuid": "f550eed2-6d8b-585a-bfa8-aeb940299527", 00:19:54.923 "is_configured": true, 00:19:54.923 "data_offset": 0, 00:19:54.923 "data_size": 65536 00:19:54.923 }, 00:19:54.923 { 00:19:54.923 "name": "BaseBdev3", 00:19:54.923 "uuid": "e68b83d9-0bbf-5d14-b506-0745b73a8648", 00:19:54.923 "is_configured": true, 00:19:54.923 "data_offset": 0, 00:19:54.923 
"data_size": 65536 00:19:54.923 }, 00:19:54.923 { 00:19:54.923 "name": "BaseBdev4", 00:19:54.923 "uuid": "c4c7dff3-bbdd-597c-95bf-4654011bd77f", 00:19:54.923 "is_configured": true, 00:19:54.923 "data_offset": 0, 00:19:54.923 "data_size": 65536 00:19:54.923 } 00:19:54.923 ] 00:19:54.923 }' 00:19:54.923 06:29:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:55.182 06:29:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:55.182 06:29:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:55.182 06:29:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:55.182 06:29:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:56.117 06:29:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:56.117 06:29:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:56.117 06:29:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:56.117 06:29:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:56.117 06:29:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:56.117 06:29:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:56.117 06:29:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:56.117 06:29:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:56.117 06:29:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.117 06:29:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.117 06:29:40 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.117 06:29:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:56.117 "name": "raid_bdev1", 00:19:56.117 "uuid": "7d76711f-c8b9-4418-97a0-6243e96e22fd", 00:19:56.117 "strip_size_kb": 64, 00:19:56.117 "state": "online", 00:19:56.117 "raid_level": "raid5f", 00:19:56.117 "superblock": false, 00:19:56.117 "num_base_bdevs": 4, 00:19:56.117 "num_base_bdevs_discovered": 4, 00:19:56.117 "num_base_bdevs_operational": 4, 00:19:56.117 "process": { 00:19:56.117 "type": "rebuild", 00:19:56.117 "target": "spare", 00:19:56.117 "progress": { 00:19:56.117 "blocks": 86400, 00:19:56.117 "percent": 43 00:19:56.117 } 00:19:56.117 }, 00:19:56.117 "base_bdevs_list": [ 00:19:56.117 { 00:19:56.117 "name": "spare", 00:19:56.117 "uuid": "bb841347-efff-52f4-bbdb-2abc03d9ee40", 00:19:56.117 "is_configured": true, 00:19:56.117 "data_offset": 0, 00:19:56.117 "data_size": 65536 00:19:56.117 }, 00:19:56.117 { 00:19:56.117 "name": "BaseBdev2", 00:19:56.117 "uuid": "f550eed2-6d8b-585a-bfa8-aeb940299527", 00:19:56.117 "is_configured": true, 00:19:56.117 "data_offset": 0, 00:19:56.117 "data_size": 65536 00:19:56.117 }, 00:19:56.117 { 00:19:56.117 "name": "BaseBdev3", 00:19:56.117 "uuid": "e68b83d9-0bbf-5d14-b506-0745b73a8648", 00:19:56.117 "is_configured": true, 00:19:56.117 "data_offset": 0, 00:19:56.117 "data_size": 65536 00:19:56.117 }, 00:19:56.117 { 00:19:56.117 "name": "BaseBdev4", 00:19:56.117 "uuid": "c4c7dff3-bbdd-597c-95bf-4654011bd77f", 00:19:56.117 "is_configured": true, 00:19:56.117 "data_offset": 0, 00:19:56.117 "data_size": 65536 00:19:56.117 } 00:19:56.117 ] 00:19:56.117 }' 00:19:56.117 06:29:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:56.117 06:29:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:56.117 06:29:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:19:56.376 06:29:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:56.376 06:29:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:57.315 06:29:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:57.315 06:29:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:57.315 06:29:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:57.315 06:29:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:57.315 06:29:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:57.315 06:29:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:57.315 06:29:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:57.315 06:29:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:57.315 06:29:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.315 06:29:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.315 06:29:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.315 06:29:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:57.315 "name": "raid_bdev1", 00:19:57.315 "uuid": "7d76711f-c8b9-4418-97a0-6243e96e22fd", 00:19:57.315 "strip_size_kb": 64, 00:19:57.315 "state": "online", 00:19:57.315 "raid_level": "raid5f", 00:19:57.315 "superblock": false, 00:19:57.315 "num_base_bdevs": 4, 00:19:57.315 "num_base_bdevs_discovered": 4, 00:19:57.315 "num_base_bdevs_operational": 4, 00:19:57.315 "process": { 00:19:57.315 "type": "rebuild", 00:19:57.315 "target": "spare", 00:19:57.315 
"progress": { 00:19:57.315 "blocks": 109440, 00:19:57.315 "percent": 55 00:19:57.315 } 00:19:57.315 }, 00:19:57.315 "base_bdevs_list": [ 00:19:57.315 { 00:19:57.315 "name": "spare", 00:19:57.315 "uuid": "bb841347-efff-52f4-bbdb-2abc03d9ee40", 00:19:57.315 "is_configured": true, 00:19:57.315 "data_offset": 0, 00:19:57.315 "data_size": 65536 00:19:57.315 }, 00:19:57.315 { 00:19:57.315 "name": "BaseBdev2", 00:19:57.315 "uuid": "f550eed2-6d8b-585a-bfa8-aeb940299527", 00:19:57.315 "is_configured": true, 00:19:57.315 "data_offset": 0, 00:19:57.315 "data_size": 65536 00:19:57.315 }, 00:19:57.315 { 00:19:57.315 "name": "BaseBdev3", 00:19:57.315 "uuid": "e68b83d9-0bbf-5d14-b506-0745b73a8648", 00:19:57.315 "is_configured": true, 00:19:57.315 "data_offset": 0, 00:19:57.315 "data_size": 65536 00:19:57.315 }, 00:19:57.315 { 00:19:57.315 "name": "BaseBdev4", 00:19:57.315 "uuid": "c4c7dff3-bbdd-597c-95bf-4654011bd77f", 00:19:57.315 "is_configured": true, 00:19:57.315 "data_offset": 0, 00:19:57.315 "data_size": 65536 00:19:57.315 } 00:19:57.315 ] 00:19:57.315 }' 00:19:57.315 06:29:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:57.315 06:29:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:57.315 06:29:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:57.315 06:29:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:57.575 06:29:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:58.515 06:29:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:58.515 06:29:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:58.515 06:29:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:58.515 06:29:42 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:58.515 06:29:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:58.515 06:29:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:58.515 06:29:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.515 06:29:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:58.515 06:29:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.515 06:29:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.515 06:29:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.515 06:29:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:58.515 "name": "raid_bdev1", 00:19:58.515 "uuid": "7d76711f-c8b9-4418-97a0-6243e96e22fd", 00:19:58.515 "strip_size_kb": 64, 00:19:58.515 "state": "online", 00:19:58.515 "raid_level": "raid5f", 00:19:58.515 "superblock": false, 00:19:58.515 "num_base_bdevs": 4, 00:19:58.515 "num_base_bdevs_discovered": 4, 00:19:58.515 "num_base_bdevs_operational": 4, 00:19:58.515 "process": { 00:19:58.515 "type": "rebuild", 00:19:58.515 "target": "spare", 00:19:58.515 "progress": { 00:19:58.515 "blocks": 130560, 00:19:58.515 "percent": 66 00:19:58.515 } 00:19:58.515 }, 00:19:58.515 "base_bdevs_list": [ 00:19:58.515 { 00:19:58.515 "name": "spare", 00:19:58.515 "uuid": "bb841347-efff-52f4-bbdb-2abc03d9ee40", 00:19:58.515 "is_configured": true, 00:19:58.515 "data_offset": 0, 00:19:58.515 "data_size": 65536 00:19:58.515 }, 00:19:58.515 { 00:19:58.515 "name": "BaseBdev2", 00:19:58.515 "uuid": "f550eed2-6d8b-585a-bfa8-aeb940299527", 00:19:58.515 "is_configured": true, 00:19:58.515 "data_offset": 0, 00:19:58.515 "data_size": 65536 00:19:58.515 }, 00:19:58.515 { 
00:19:58.515 "name": "BaseBdev3", 00:19:58.515 "uuid": "e68b83d9-0bbf-5d14-b506-0745b73a8648", 00:19:58.515 "is_configured": true, 00:19:58.515 "data_offset": 0, 00:19:58.515 "data_size": 65536 00:19:58.515 }, 00:19:58.515 { 00:19:58.515 "name": "BaseBdev4", 00:19:58.515 "uuid": "c4c7dff3-bbdd-597c-95bf-4654011bd77f", 00:19:58.515 "is_configured": true, 00:19:58.515 "data_offset": 0, 00:19:58.515 "data_size": 65536 00:19:58.515 } 00:19:58.515 ] 00:19:58.515 }' 00:19:58.515 06:29:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:58.515 06:29:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:58.515 06:29:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:58.515 06:29:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:58.515 06:29:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:59.896 06:29:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:59.896 06:29:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:59.896 06:29:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:59.896 06:29:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:59.896 06:29:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:59.896 06:29:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:59.896 06:29:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:59.896 06:29:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:59.896 06:29:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:19:59.896 06:29:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.896 06:29:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.896 06:29:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:59.896 "name": "raid_bdev1", 00:19:59.896 "uuid": "7d76711f-c8b9-4418-97a0-6243e96e22fd", 00:19:59.896 "strip_size_kb": 64, 00:19:59.896 "state": "online", 00:19:59.896 "raid_level": "raid5f", 00:19:59.896 "superblock": false, 00:19:59.896 "num_base_bdevs": 4, 00:19:59.896 "num_base_bdevs_discovered": 4, 00:19:59.896 "num_base_bdevs_operational": 4, 00:19:59.896 "process": { 00:19:59.896 "type": "rebuild", 00:19:59.896 "target": "spare", 00:19:59.896 "progress": { 00:19:59.896 "blocks": 153600, 00:19:59.896 "percent": 78 00:19:59.896 } 00:19:59.896 }, 00:19:59.896 "base_bdevs_list": [ 00:19:59.896 { 00:19:59.896 "name": "spare", 00:19:59.896 "uuid": "bb841347-efff-52f4-bbdb-2abc03d9ee40", 00:19:59.896 "is_configured": true, 00:19:59.896 "data_offset": 0, 00:19:59.896 "data_size": 65536 00:19:59.896 }, 00:19:59.896 { 00:19:59.896 "name": "BaseBdev2", 00:19:59.896 "uuid": "f550eed2-6d8b-585a-bfa8-aeb940299527", 00:19:59.896 "is_configured": true, 00:19:59.896 "data_offset": 0, 00:19:59.896 "data_size": 65536 00:19:59.896 }, 00:19:59.896 { 00:19:59.896 "name": "BaseBdev3", 00:19:59.896 "uuid": "e68b83d9-0bbf-5d14-b506-0745b73a8648", 00:19:59.896 "is_configured": true, 00:19:59.896 "data_offset": 0, 00:19:59.896 "data_size": 65536 00:19:59.896 }, 00:19:59.896 { 00:19:59.896 "name": "BaseBdev4", 00:19:59.896 "uuid": "c4c7dff3-bbdd-597c-95bf-4654011bd77f", 00:19:59.896 "is_configured": true, 00:19:59.896 "data_offset": 0, 00:19:59.896 "data_size": 65536 00:19:59.896 } 00:19:59.896 ] 00:19:59.896 }' 00:19:59.896 06:29:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:59.896 06:29:43 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:59.896 06:29:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:59.896 06:29:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:59.896 06:29:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:00.833 06:29:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:00.833 06:29:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:00.833 06:29:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:00.833 06:29:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:00.833 06:29:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:00.833 06:29:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:00.833 06:29:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:00.833 06:29:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:00.833 06:29:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.833 06:29:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.833 06:29:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.833 06:29:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:00.833 "name": "raid_bdev1", 00:20:00.833 "uuid": "7d76711f-c8b9-4418-97a0-6243e96e22fd", 00:20:00.833 "strip_size_kb": 64, 00:20:00.833 "state": "online", 00:20:00.833 "raid_level": "raid5f", 00:20:00.833 "superblock": false, 00:20:00.833 "num_base_bdevs": 4, 00:20:00.833 
"num_base_bdevs_discovered": 4, 00:20:00.833 "num_base_bdevs_operational": 4, 00:20:00.833 "process": { 00:20:00.833 "type": "rebuild", 00:20:00.833 "target": "spare", 00:20:00.833 "progress": { 00:20:00.833 "blocks": 174720, 00:20:00.833 "percent": 88 00:20:00.833 } 00:20:00.833 }, 00:20:00.833 "base_bdevs_list": [ 00:20:00.833 { 00:20:00.833 "name": "spare", 00:20:00.833 "uuid": "bb841347-efff-52f4-bbdb-2abc03d9ee40", 00:20:00.833 "is_configured": true, 00:20:00.833 "data_offset": 0, 00:20:00.833 "data_size": 65536 00:20:00.833 }, 00:20:00.833 { 00:20:00.833 "name": "BaseBdev2", 00:20:00.833 "uuid": "f550eed2-6d8b-585a-bfa8-aeb940299527", 00:20:00.833 "is_configured": true, 00:20:00.833 "data_offset": 0, 00:20:00.833 "data_size": 65536 00:20:00.833 }, 00:20:00.833 { 00:20:00.833 "name": "BaseBdev3", 00:20:00.833 "uuid": "e68b83d9-0bbf-5d14-b506-0745b73a8648", 00:20:00.833 "is_configured": true, 00:20:00.833 "data_offset": 0, 00:20:00.833 "data_size": 65536 00:20:00.833 }, 00:20:00.833 { 00:20:00.833 "name": "BaseBdev4", 00:20:00.833 "uuid": "c4c7dff3-bbdd-597c-95bf-4654011bd77f", 00:20:00.833 "is_configured": true, 00:20:00.833 "data_offset": 0, 00:20:00.833 "data_size": 65536 00:20:00.833 } 00:20:00.833 ] 00:20:00.833 }' 00:20:00.833 06:29:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:00.833 06:29:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:00.833 06:29:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:00.833 06:29:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:00.833 06:29:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:02.212 [2024-11-26 06:29:45.906292] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:02.212 [2024-11-26 06:29:45.906405] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:02.212 [2024-11-26 06:29:45.906500] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:02.212 06:29:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:02.212 06:29:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:02.212 06:29:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:02.212 06:29:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:02.212 06:29:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:02.212 06:29:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:02.212 06:29:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:02.212 06:29:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:02.212 06:29:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.212 06:29:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.212 06:29:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.212 06:29:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:02.212 "name": "raid_bdev1", 00:20:02.212 "uuid": "7d76711f-c8b9-4418-97a0-6243e96e22fd", 00:20:02.212 "strip_size_kb": 64, 00:20:02.212 "state": "online", 00:20:02.212 "raid_level": "raid5f", 00:20:02.212 "superblock": false, 00:20:02.212 "num_base_bdevs": 4, 00:20:02.212 "num_base_bdevs_discovered": 4, 00:20:02.212 "num_base_bdevs_operational": 4, 00:20:02.212 "base_bdevs_list": [ 00:20:02.212 { 00:20:02.212 "name": "spare", 00:20:02.212 "uuid": 
"bb841347-efff-52f4-bbdb-2abc03d9ee40", 00:20:02.212 "is_configured": true, 00:20:02.212 "data_offset": 0, 00:20:02.212 "data_size": 65536 00:20:02.212 }, 00:20:02.212 { 00:20:02.212 "name": "BaseBdev2", 00:20:02.212 "uuid": "f550eed2-6d8b-585a-bfa8-aeb940299527", 00:20:02.212 "is_configured": true, 00:20:02.212 "data_offset": 0, 00:20:02.212 "data_size": 65536 00:20:02.212 }, 00:20:02.212 { 00:20:02.212 "name": "BaseBdev3", 00:20:02.212 "uuid": "e68b83d9-0bbf-5d14-b506-0745b73a8648", 00:20:02.212 "is_configured": true, 00:20:02.212 "data_offset": 0, 00:20:02.212 "data_size": 65536 00:20:02.212 }, 00:20:02.212 { 00:20:02.212 "name": "BaseBdev4", 00:20:02.212 "uuid": "c4c7dff3-bbdd-597c-95bf-4654011bd77f", 00:20:02.212 "is_configured": true, 00:20:02.212 "data_offset": 0, 00:20:02.212 "data_size": 65536 00:20:02.212 } 00:20:02.212 ] 00:20:02.212 }' 00:20:02.212 06:29:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:02.212 06:29:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:02.212 06:29:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:02.212 06:29:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:02.212 06:29:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:20:02.212 06:29:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:02.212 06:29:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:02.212 06:29:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:02.212 06:29:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:02.212 06:29:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:02.212 06:29:46 bdev_raid.raid5f_rebuild_test 
-- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:02.212 06:29:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:02.212 06:29:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.212 06:29:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.212 06:29:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.212 06:29:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:02.212 "name": "raid_bdev1", 00:20:02.212 "uuid": "7d76711f-c8b9-4418-97a0-6243e96e22fd", 00:20:02.212 "strip_size_kb": 64, 00:20:02.212 "state": "online", 00:20:02.212 "raid_level": "raid5f", 00:20:02.212 "superblock": false, 00:20:02.212 "num_base_bdevs": 4, 00:20:02.212 "num_base_bdevs_discovered": 4, 00:20:02.212 "num_base_bdevs_operational": 4, 00:20:02.212 "base_bdevs_list": [ 00:20:02.212 { 00:20:02.212 "name": "spare", 00:20:02.212 "uuid": "bb841347-efff-52f4-bbdb-2abc03d9ee40", 00:20:02.212 "is_configured": true, 00:20:02.212 "data_offset": 0, 00:20:02.212 "data_size": 65536 00:20:02.212 }, 00:20:02.212 { 00:20:02.212 "name": "BaseBdev2", 00:20:02.212 "uuid": "f550eed2-6d8b-585a-bfa8-aeb940299527", 00:20:02.212 "is_configured": true, 00:20:02.212 "data_offset": 0, 00:20:02.212 "data_size": 65536 00:20:02.212 }, 00:20:02.212 { 00:20:02.212 "name": "BaseBdev3", 00:20:02.212 "uuid": "e68b83d9-0bbf-5d14-b506-0745b73a8648", 00:20:02.212 "is_configured": true, 00:20:02.212 "data_offset": 0, 00:20:02.212 "data_size": 65536 00:20:02.212 }, 00:20:02.212 { 00:20:02.212 "name": "BaseBdev4", 00:20:02.212 "uuid": "c4c7dff3-bbdd-597c-95bf-4654011bd77f", 00:20:02.212 "is_configured": true, 00:20:02.212 "data_offset": 0, 00:20:02.212 "data_size": 65536 00:20:02.212 } 00:20:02.212 ] 00:20:02.212 }' 00:20:02.212 06:29:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:20:02.212 06:29:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:02.212 06:29:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:02.213 06:29:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:02.213 06:29:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:20:02.213 06:29:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:02.213 06:29:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:02.213 06:29:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:02.213 06:29:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:02.213 06:29:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:02.213 06:29:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:02.213 06:29:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:02.213 06:29:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:02.213 06:29:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:02.213 06:29:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:02.213 06:29:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.213 06:29:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.213 06:29:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:02.213 06:29:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:20:02.213 06:29:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:02.213 "name": "raid_bdev1", 00:20:02.213 "uuid": "7d76711f-c8b9-4418-97a0-6243e96e22fd", 00:20:02.213 "strip_size_kb": 64, 00:20:02.213 "state": "online", 00:20:02.213 "raid_level": "raid5f", 00:20:02.213 "superblock": false, 00:20:02.213 "num_base_bdevs": 4, 00:20:02.213 "num_base_bdevs_discovered": 4, 00:20:02.213 "num_base_bdevs_operational": 4, 00:20:02.213 "base_bdevs_list": [ 00:20:02.213 { 00:20:02.213 "name": "spare", 00:20:02.213 "uuid": "bb841347-efff-52f4-bbdb-2abc03d9ee40", 00:20:02.213 "is_configured": true, 00:20:02.213 "data_offset": 0, 00:20:02.213 "data_size": 65536 00:20:02.213 }, 00:20:02.213 { 00:20:02.213 "name": "BaseBdev2", 00:20:02.213 "uuid": "f550eed2-6d8b-585a-bfa8-aeb940299527", 00:20:02.213 "is_configured": true, 00:20:02.213 "data_offset": 0, 00:20:02.213 "data_size": 65536 00:20:02.213 }, 00:20:02.213 { 00:20:02.213 "name": "BaseBdev3", 00:20:02.213 "uuid": "e68b83d9-0bbf-5d14-b506-0745b73a8648", 00:20:02.213 "is_configured": true, 00:20:02.213 "data_offset": 0, 00:20:02.213 "data_size": 65536 00:20:02.213 }, 00:20:02.213 { 00:20:02.213 "name": "BaseBdev4", 00:20:02.213 "uuid": "c4c7dff3-bbdd-597c-95bf-4654011bd77f", 00:20:02.213 "is_configured": true, 00:20:02.213 "data_offset": 0, 00:20:02.213 "data_size": 65536 00:20:02.213 } 00:20:02.213 ] 00:20:02.213 }' 00:20:02.213 06:29:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:02.213 06:29:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.783 06:29:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:02.783 06:29:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.783 06:29:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.783 [2024-11-26 06:29:46.643361] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:02.783 [2024-11-26 06:29:46.643411] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:02.783 [2024-11-26 06:29:46.643527] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:02.783 [2024-11-26 06:29:46.643645] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:02.783 [2024-11-26 06:29:46.643658] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:02.783 06:29:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.783 06:29:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:20:02.783 06:29:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:02.783 06:29:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.783 06:29:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:02.783 06:29:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.783 06:29:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:20:02.783 06:29:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:20:02.783 06:29:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:20:02.783 06:29:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:02.783 06:29:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:02.783 06:29:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:20:02.783 06:29:46 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@10 -- # local bdev_list 00:20:02.783 06:29:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:02.783 06:29:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:02.783 06:29:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:20:02.783 06:29:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:02.783 06:29:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:02.783 06:29:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:02.783 /dev/nbd0 00:20:03.042 06:29:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:03.042 06:29:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:03.042 06:29:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:03.042 06:29:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:20:03.042 06:29:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:03.042 06:29:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:03.042 06:29:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:03.042 06:29:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:20:03.042 06:29:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:03.042 06:29:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:03.042 06:29:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:03.042 1+0 records in 
00:20:03.042 1+0 records out 00:20:03.042 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000436996 s, 9.4 MB/s 00:20:03.042 06:29:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:03.042 06:29:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:20:03.042 06:29:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:03.042 06:29:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:03.042 06:29:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:20:03.042 06:29:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:03.042 06:29:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:03.042 06:29:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:20:03.042 /dev/nbd1 00:20:03.300 06:29:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:03.300 06:29:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:03.300 06:29:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:20:03.300 06:29:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:20:03.300 06:29:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:03.300 06:29:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:03.300 06:29:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:20:03.300 06:29:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:20:03.300 06:29:47 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:03.300 06:29:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:03.300 06:29:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:03.300 1+0 records in 00:20:03.300 1+0 records out 00:20:03.300 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000307965 s, 13.3 MB/s 00:20:03.300 06:29:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:03.300 06:29:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:20:03.300 06:29:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:03.300 06:29:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:03.300 06:29:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:20:03.300 06:29:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:03.300 06:29:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:03.300 06:29:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:20:03.300 06:29:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:20:03.300 06:29:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:03.300 06:29:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:03.300 06:29:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:03.300 06:29:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:20:03.300 06:29:47 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:03.300 06:29:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:03.560 06:29:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:03.560 06:29:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:03.560 06:29:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:03.560 06:29:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:03.560 06:29:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:03.560 06:29:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:03.560 06:29:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:20:03.560 06:29:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:20:03.560 06:29:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:03.560 06:29:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:20:03.819 06:29:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:03.819 06:29:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:03.819 06:29:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:03.819 06:29:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:03.819 06:29:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:03.819 06:29:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:03.819 06:29:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 
00:20:03.819 06:29:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:20:03.819 06:29:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:20:03.819 06:29:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 85173 00:20:03.819 06:29:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 85173 ']' 00:20:03.819 06:29:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 85173 00:20:03.819 06:29:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:20:03.819 06:29:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:03.819 06:29:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85173 00:20:03.819 06:29:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:03.819 06:29:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:03.819 killing process with pid 85173 00:20:03.819 06:29:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85173' 00:20:03.819 06:29:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 85173 00:20:03.819 Received shutdown signal, test time was about 60.000000 seconds 00:20:03.819 00:20:03.819 Latency(us) 00:20:03.819 [2024-11-26T06:29:47.956Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:03.819 [2024-11-26T06:29:47.956Z] =================================================================================================================== 00:20:03.819 [2024-11-26T06:29:47.956Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:03.819 [2024-11-26 06:29:47.899469] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:03.819 06:29:47 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@978 -- # wait 85173 00:20:04.387 [2024-11-26 06:29:48.436250] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:05.767 06:29:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:20:05.767 00:20:05.767 real 0m19.373s 00:20:05.767 user 0m23.067s 00:20:05.767 sys 0m2.504s 00:20:05.767 06:29:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:05.767 06:29:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.767 ************************************ 00:20:05.767 END TEST raid5f_rebuild_test 00:20:05.767 ************************************ 00:20:05.767 06:29:49 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:20:05.767 06:29:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:20:05.767 06:29:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:05.767 06:29:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:05.767 ************************************ 00:20:05.767 START TEST raid5f_rebuild_test_sb 00:20:05.767 ************************************ 00:20:05.767 06:29:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:20:05.767 06:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:20:05.767 06:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:20:05.767 06:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:20:05.767 06:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:20:05.767 06:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:20:05.767 06:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:05.767 
06:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:05.767 06:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:20:05.767 06:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:05.767 06:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:05.767 06:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:05.767 06:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:05.767 06:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:05.767 06:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:20:05.767 06:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:05.767 06:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:05.767 06:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:20:05.767 06:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:05.767 06:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:05.767 06:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:05.767 06:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:05.767 06:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:20:05.767 06:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:05.767 06:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:05.767 06:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 
00:20:05.767 06:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:05.767 06:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:20:05.767 06:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:20:05.767 06:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:20:05.767 06:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:20:05.768 06:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:20:05.768 06:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:20:05.768 06:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=85676 00:20:05.768 06:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 85676 00:20:05.768 06:29:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:05.768 06:29:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 85676 ']' 00:20:05.768 06:29:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:05.768 06:29:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:05.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:05.768 06:29:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:05.768 06:29:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:05.768 06:29:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:05.768 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:05.768 Zero copy mechanism will not be used. 00:20:05.768 [2024-11-26 06:29:49.788223] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:20:05.768 [2024-11-26 06:29:49.788341] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85676 ] 00:20:06.027 [2024-11-26 06:29:49.961610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.027 [2024-11-26 06:29:50.102190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:06.286 [2024-11-26 06:29:50.344191] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:06.286 [2024-11-26 06:29:50.344239] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:06.546 06:29:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:06.546 06:29:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:20:06.546 06:29:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:06.546 06:29:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:06.546 06:29:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.546 06:29:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:06.806 BaseBdev1_malloc 00:20:06.806 06:29:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:20:06.806 06:29:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:06.806 06:29:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.806 06:29:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:06.806 [2024-11-26 06:29:50.692816] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:06.806 [2024-11-26 06:29:50.692899] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:06.806 [2024-11-26 06:29:50.692925] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:06.806 [2024-11-26 06:29:50.692938] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:06.806 [2024-11-26 06:29:50.695406] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:06.806 [2024-11-26 06:29:50.695447] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:06.806 BaseBdev1 00:20:06.806 06:29:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.806 06:29:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:06.806 06:29:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:06.806 06:29:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.806 06:29:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:06.806 BaseBdev2_malloc 00:20:06.806 06:29:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.806 06:29:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:06.806 
06:29:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.806 06:29:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:06.806 [2024-11-26 06:29:50.752341] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:06.806 [2024-11-26 06:29:50.752419] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:06.806 [2024-11-26 06:29:50.752442] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:06.806 [2024-11-26 06:29:50.752456] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:06.806 [2024-11-26 06:29:50.755035] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:06.806 [2024-11-26 06:29:50.755093] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:06.806 BaseBdev2 00:20:06.806 06:29:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.806 06:29:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:06.806 06:29:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:06.806 06:29:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.806 06:29:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:06.806 BaseBdev3_malloc 00:20:06.806 06:29:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.806 06:29:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:20:06.806 06:29:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.806 06:29:50 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:20:06.806 [2024-11-26 06:29:50.828998] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:20:06.806 [2024-11-26 06:29:50.829143] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:06.806 [2024-11-26 06:29:50.829175] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:06.806 [2024-11-26 06:29:50.829188] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:06.806 [2024-11-26 06:29:50.831611] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:06.806 [2024-11-26 06:29:50.831652] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:06.806 BaseBdev3 00:20:06.806 06:29:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.806 06:29:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:06.806 06:29:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:20:06.806 06:29:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.806 06:29:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:06.806 BaseBdev4_malloc 00:20:06.806 06:29:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.806 06:29:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:20:06.806 06:29:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.806 06:29:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:06.806 [2024-11-26 06:29:50.891527] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:20:06.806 
[2024-11-26 06:29:50.891638] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:06.806 [2024-11-26 06:29:50.891681] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:20:06.806 [2024-11-26 06:29:50.891693] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:06.806 [2024-11-26 06:29:50.894206] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:06.806 [2024-11-26 06:29:50.894245] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:20:06.806 BaseBdev4 00:20:06.806 06:29:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.806 06:29:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:20:06.806 06:29:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.806 06:29:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:07.066 spare_malloc 00:20:07.066 06:29:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.066 06:29:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:07.066 06:29:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.066 06:29:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:07.066 spare_delay 00:20:07.066 06:29:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.066 06:29:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:07.066 06:29:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.066 06:29:50 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:07.066 [2024-11-26 06:29:50.966867] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:07.066 [2024-11-26 06:29:50.966933] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:07.066 [2024-11-26 06:29:50.966955] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:20:07.066 [2024-11-26 06:29:50.966967] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:07.066 [2024-11-26 06:29:50.969599] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:07.066 [2024-11-26 06:29:50.969688] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:07.066 spare 00:20:07.066 06:29:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.066 06:29:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:20:07.066 06:29:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.066 06:29:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:07.066 [2024-11-26 06:29:50.978905] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:07.066 [2024-11-26 06:29:50.981060] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:07.066 [2024-11-26 06:29:50.981141] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:07.066 [2024-11-26 06:29:50.981192] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:07.066 [2024-11-26 06:29:50.981409] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:07.066 [2024-11-26 
06:29:50.981433] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:07.066 [2024-11-26 06:29:50.981708] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:07.066 [2024-11-26 06:29:50.989566] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:07.066 [2024-11-26 06:29:50.989586] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:07.066 [2024-11-26 06:29:50.989817] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:07.066 06:29:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.066 06:29:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:20:07.066 06:29:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:07.066 06:29:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:07.066 06:29:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:07.066 06:29:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:07.066 06:29:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:07.066 06:29:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:07.066 06:29:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:07.066 06:29:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:07.066 06:29:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:07.066 06:29:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:07.066 06:29:50 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.066 06:29:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:07.066 06:29:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:07.066 06:29:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.066 06:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:07.066 "name": "raid_bdev1", 00:20:07.066 "uuid": "3adf285b-6cf7-43ef-99b7-0d0543ee237e", 00:20:07.066 "strip_size_kb": 64, 00:20:07.066 "state": "online", 00:20:07.066 "raid_level": "raid5f", 00:20:07.066 "superblock": true, 00:20:07.066 "num_base_bdevs": 4, 00:20:07.066 "num_base_bdevs_discovered": 4, 00:20:07.066 "num_base_bdevs_operational": 4, 00:20:07.066 "base_bdevs_list": [ 00:20:07.066 { 00:20:07.066 "name": "BaseBdev1", 00:20:07.066 "uuid": "ba70a146-b540-57d2-87ff-c8e51746e1fc", 00:20:07.066 "is_configured": true, 00:20:07.066 "data_offset": 2048, 00:20:07.066 "data_size": 63488 00:20:07.066 }, 00:20:07.066 { 00:20:07.066 "name": "BaseBdev2", 00:20:07.066 "uuid": "ca2c5958-0022-5453-9fd7-f0d5e91f8b8c", 00:20:07.066 "is_configured": true, 00:20:07.066 "data_offset": 2048, 00:20:07.066 "data_size": 63488 00:20:07.066 }, 00:20:07.066 { 00:20:07.066 "name": "BaseBdev3", 00:20:07.066 "uuid": "81595d90-97e3-5e97-9da4-19a7fc63e6eb", 00:20:07.066 "is_configured": true, 00:20:07.066 "data_offset": 2048, 00:20:07.066 "data_size": 63488 00:20:07.066 }, 00:20:07.066 { 00:20:07.066 "name": "BaseBdev4", 00:20:07.066 "uuid": "e4d12cdc-c702-59f9-a4cd-eb56a3166fb5", 00:20:07.066 "is_configured": true, 00:20:07.066 "data_offset": 2048, 00:20:07.066 "data_size": 63488 00:20:07.066 } 00:20:07.066 ] 00:20:07.066 }' 00:20:07.066 06:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:07.066 06:29:51 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:07.636 06:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:07.636 06:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:07.636 06:29:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.636 06:29:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:07.636 [2024-11-26 06:29:51.491144] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:07.636 06:29:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.636 06:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:20:07.636 06:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:07.636 06:29:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.636 06:29:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:07.636 06:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:07.636 06:29:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.636 06:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:20:07.636 06:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:20:07.636 06:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:20:07.636 06:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:20:07.636 06:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:20:07.636 06:29:51 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:07.636 06:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:20:07.636 06:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:07.636 06:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:07.636 06:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:07.636 06:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:20:07.636 06:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:07.636 06:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:07.636 06:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:07.896 [2024-11-26 06:29:51.790430] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:20:07.896 /dev/nbd0 00:20:07.896 06:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:07.896 06:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:07.896 06:29:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:07.896 06:29:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:20:07.896 06:29:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:07.896 06:29:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:07.896 06:29:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:07.896 06:29:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 
00:20:07.896 06:29:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:07.896 06:29:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:07.896 06:29:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:07.896 1+0 records in 00:20:07.896 1+0 records out 00:20:07.896 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000385812 s, 10.6 MB/s 00:20:07.896 06:29:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:07.896 06:29:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:20:07.896 06:29:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:07.896 06:29:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:07.896 06:29:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:20:07.896 06:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:07.896 06:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:07.896 06:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:20:07.896 06:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:20:07.896 06:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:20:07.896 06:29:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:20:08.465 496+0 records in 00:20:08.465 496+0 records out 00:20:08.465 97517568 bytes (98 MB, 93 MiB) copied, 0.518347 s, 188 MB/s 00:20:08.465 06:29:52 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:20:08.465 06:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:08.465 06:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:08.465 06:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:08.465 06:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:20:08.465 06:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:08.465 06:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:08.724 06:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:08.724 [2024-11-26 06:29:52.616385] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:08.724 06:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:08.724 06:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:08.724 06:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:08.724 06:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:08.724 06:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:08.724 06:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:20:08.724 06:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:20:08.724 06:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:20:08.724 06:29:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.724 06:29:52 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:20:08.724 [2024-11-26 06:29:52.636996] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:08.724 06:29:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.724 06:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:08.724 06:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:08.724 06:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:08.724 06:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:08.724 06:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:08.724 06:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:08.724 06:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:08.724 06:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:08.724 06:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:08.724 06:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:08.724 06:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:08.724 06:29:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.724 06:29:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:08.724 06:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:08.724 06:29:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.724 06:29:52 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:08.724 "name": "raid_bdev1", 00:20:08.724 "uuid": "3adf285b-6cf7-43ef-99b7-0d0543ee237e", 00:20:08.724 "strip_size_kb": 64, 00:20:08.724 "state": "online", 00:20:08.724 "raid_level": "raid5f", 00:20:08.724 "superblock": true, 00:20:08.724 "num_base_bdevs": 4, 00:20:08.724 "num_base_bdevs_discovered": 3, 00:20:08.724 "num_base_bdevs_operational": 3, 00:20:08.724 "base_bdevs_list": [ 00:20:08.724 { 00:20:08.724 "name": null, 00:20:08.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.724 "is_configured": false, 00:20:08.724 "data_offset": 0, 00:20:08.724 "data_size": 63488 00:20:08.724 }, 00:20:08.724 { 00:20:08.724 "name": "BaseBdev2", 00:20:08.724 "uuid": "ca2c5958-0022-5453-9fd7-f0d5e91f8b8c", 00:20:08.724 "is_configured": true, 00:20:08.724 "data_offset": 2048, 00:20:08.724 "data_size": 63488 00:20:08.724 }, 00:20:08.724 { 00:20:08.724 "name": "BaseBdev3", 00:20:08.724 "uuid": "81595d90-97e3-5e97-9da4-19a7fc63e6eb", 00:20:08.724 "is_configured": true, 00:20:08.724 "data_offset": 2048, 00:20:08.724 "data_size": 63488 00:20:08.724 }, 00:20:08.724 { 00:20:08.724 "name": "BaseBdev4", 00:20:08.724 "uuid": "e4d12cdc-c702-59f9-a4cd-eb56a3166fb5", 00:20:08.724 "is_configured": true, 00:20:08.724 "data_offset": 2048, 00:20:08.724 "data_size": 63488 00:20:08.724 } 00:20:08.724 ] 00:20:08.724 }' 00:20:08.724 06:29:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:08.724 06:29:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:09.292 06:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:09.292 06:29:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.292 06:29:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:09.292 [2024-11-26 06:29:53.132156] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev spare is claimed 00:20:09.292 [2024-11-26 06:29:53.148520] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:20:09.292 06:29:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.292 06:29:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:09.292 [2024-11-26 06:29:53.157961] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:10.229 06:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:10.229 06:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:10.229 06:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:10.229 06:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:10.229 06:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:10.229 06:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:10.229 06:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:10.229 06:29:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.229 06:29:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:10.229 06:29:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.229 06:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:10.229 "name": "raid_bdev1", 00:20:10.229 "uuid": "3adf285b-6cf7-43ef-99b7-0d0543ee237e", 00:20:10.229 "strip_size_kb": 64, 00:20:10.229 "state": "online", 00:20:10.229 "raid_level": "raid5f", 00:20:10.229 "superblock": true, 00:20:10.229 "num_base_bdevs": 4, 
00:20:10.229 "num_base_bdevs_discovered": 4, 00:20:10.229 "num_base_bdevs_operational": 4, 00:20:10.229 "process": { 00:20:10.229 "type": "rebuild", 00:20:10.229 "target": "spare", 00:20:10.229 "progress": { 00:20:10.229 "blocks": 19200, 00:20:10.229 "percent": 10 00:20:10.229 } 00:20:10.229 }, 00:20:10.229 "base_bdevs_list": [ 00:20:10.229 { 00:20:10.229 "name": "spare", 00:20:10.230 "uuid": "2a1441f1-5e94-5c1d-abb2-5664b58c8eb7", 00:20:10.230 "is_configured": true, 00:20:10.230 "data_offset": 2048, 00:20:10.230 "data_size": 63488 00:20:10.230 }, 00:20:10.230 { 00:20:10.230 "name": "BaseBdev2", 00:20:10.230 "uuid": "ca2c5958-0022-5453-9fd7-f0d5e91f8b8c", 00:20:10.230 "is_configured": true, 00:20:10.230 "data_offset": 2048, 00:20:10.230 "data_size": 63488 00:20:10.230 }, 00:20:10.230 { 00:20:10.230 "name": "BaseBdev3", 00:20:10.230 "uuid": "81595d90-97e3-5e97-9da4-19a7fc63e6eb", 00:20:10.230 "is_configured": true, 00:20:10.230 "data_offset": 2048, 00:20:10.230 "data_size": 63488 00:20:10.230 }, 00:20:10.230 { 00:20:10.230 "name": "BaseBdev4", 00:20:10.230 "uuid": "e4d12cdc-c702-59f9-a4cd-eb56a3166fb5", 00:20:10.230 "is_configured": true, 00:20:10.230 "data_offset": 2048, 00:20:10.230 "data_size": 63488 00:20:10.230 } 00:20:10.230 ] 00:20:10.230 }' 00:20:10.230 06:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:10.230 06:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:10.230 06:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:10.230 06:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:10.230 06:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:10.230 06:29:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.230 06:29:54 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:10.230 [2024-11-26 06:29:54.297580] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:10.489 [2024-11-26 06:29:54.369703] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:10.489 [2024-11-26 06:29:54.369796] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:10.489 [2024-11-26 06:29:54.369817] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:10.489 [2024-11-26 06:29:54.369830] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:10.489 06:29:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.489 06:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:10.489 06:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:10.489 06:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:10.489 06:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:10.489 06:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:10.489 06:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:10.489 06:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:10.489 06:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:10.489 06:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:10.489 06:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:10.489 06:29:54 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:10.489 06:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:10.489 06:29:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.489 06:29:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:10.489 06:29:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.489 06:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:10.489 "name": "raid_bdev1", 00:20:10.489 "uuid": "3adf285b-6cf7-43ef-99b7-0d0543ee237e", 00:20:10.489 "strip_size_kb": 64, 00:20:10.489 "state": "online", 00:20:10.489 "raid_level": "raid5f", 00:20:10.489 "superblock": true, 00:20:10.489 "num_base_bdevs": 4, 00:20:10.489 "num_base_bdevs_discovered": 3, 00:20:10.489 "num_base_bdevs_operational": 3, 00:20:10.489 "base_bdevs_list": [ 00:20:10.489 { 00:20:10.489 "name": null, 00:20:10.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.489 "is_configured": false, 00:20:10.489 "data_offset": 0, 00:20:10.489 "data_size": 63488 00:20:10.489 }, 00:20:10.489 { 00:20:10.489 "name": "BaseBdev2", 00:20:10.489 "uuid": "ca2c5958-0022-5453-9fd7-f0d5e91f8b8c", 00:20:10.489 "is_configured": true, 00:20:10.489 "data_offset": 2048, 00:20:10.489 "data_size": 63488 00:20:10.489 }, 00:20:10.489 { 00:20:10.489 "name": "BaseBdev3", 00:20:10.489 "uuid": "81595d90-97e3-5e97-9da4-19a7fc63e6eb", 00:20:10.489 "is_configured": true, 00:20:10.489 "data_offset": 2048, 00:20:10.489 "data_size": 63488 00:20:10.489 }, 00:20:10.489 { 00:20:10.490 "name": "BaseBdev4", 00:20:10.490 "uuid": "e4d12cdc-c702-59f9-a4cd-eb56a3166fb5", 00:20:10.490 "is_configured": true, 00:20:10.490 "data_offset": 2048, 00:20:10.490 "data_size": 63488 00:20:10.490 } 00:20:10.490 ] 00:20:10.490 }' 00:20:10.490 06:29:54 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:10.490 06:29:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:10.750 06:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:10.750 06:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:10.750 06:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:10.750 06:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:10.750 06:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:10.750 06:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:10.750 06:29:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.750 06:29:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:10.750 06:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:10.750 06:29:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.750 06:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:10.750 "name": "raid_bdev1", 00:20:10.750 "uuid": "3adf285b-6cf7-43ef-99b7-0d0543ee237e", 00:20:10.750 "strip_size_kb": 64, 00:20:10.750 "state": "online", 00:20:10.750 "raid_level": "raid5f", 00:20:10.750 "superblock": true, 00:20:10.750 "num_base_bdevs": 4, 00:20:10.750 "num_base_bdevs_discovered": 3, 00:20:10.750 "num_base_bdevs_operational": 3, 00:20:10.750 "base_bdevs_list": [ 00:20:10.750 { 00:20:10.750 "name": null, 00:20:10.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.750 "is_configured": false, 00:20:10.750 "data_offset": 0, 00:20:10.750 "data_size": 63488 00:20:10.750 }, 00:20:10.750 { 
00:20:10.750 "name": "BaseBdev2", 00:20:10.750 "uuid": "ca2c5958-0022-5453-9fd7-f0d5e91f8b8c", 00:20:10.750 "is_configured": true, 00:20:10.750 "data_offset": 2048, 00:20:10.750 "data_size": 63488 00:20:10.750 }, 00:20:10.750 { 00:20:10.750 "name": "BaseBdev3", 00:20:10.750 "uuid": "81595d90-97e3-5e97-9da4-19a7fc63e6eb", 00:20:10.750 "is_configured": true, 00:20:10.750 "data_offset": 2048, 00:20:10.750 "data_size": 63488 00:20:10.750 }, 00:20:10.750 { 00:20:10.750 "name": "BaseBdev4", 00:20:10.750 "uuid": "e4d12cdc-c702-59f9-a4cd-eb56a3166fb5", 00:20:10.750 "is_configured": true, 00:20:10.750 "data_offset": 2048, 00:20:10.750 "data_size": 63488 00:20:10.750 } 00:20:10.750 ] 00:20:10.750 }' 00:20:10.750 06:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:11.010 06:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:11.010 06:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:11.010 06:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:11.010 06:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:11.010 06:29:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.010 06:29:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:11.010 [2024-11-26 06:29:54.981621] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:11.010 [2024-11-26 06:29:54.997289] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:20:11.010 06:29:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.010 06:29:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:20:11.010 [2024-11-26 06:29:55.006547] 
bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:11.951 06:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:11.951 06:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:11.951 06:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:11.951 06:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:11.951 06:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:11.951 06:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:11.951 06:29:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.951 06:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:11.951 06:29:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:11.951 06:29:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.951 06:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:11.951 "name": "raid_bdev1", 00:20:11.951 "uuid": "3adf285b-6cf7-43ef-99b7-0d0543ee237e", 00:20:11.951 "strip_size_kb": 64, 00:20:11.951 "state": "online", 00:20:11.951 "raid_level": "raid5f", 00:20:11.951 "superblock": true, 00:20:11.951 "num_base_bdevs": 4, 00:20:11.951 "num_base_bdevs_discovered": 4, 00:20:11.951 "num_base_bdevs_operational": 4, 00:20:11.951 "process": { 00:20:11.951 "type": "rebuild", 00:20:11.951 "target": "spare", 00:20:11.951 "progress": { 00:20:11.951 "blocks": 17280, 00:20:11.951 "percent": 9 00:20:11.951 } 00:20:11.951 }, 00:20:11.951 "base_bdevs_list": [ 00:20:11.951 { 00:20:11.951 "name": "spare", 00:20:11.951 "uuid": 
"2a1441f1-5e94-5c1d-abb2-5664b58c8eb7", 00:20:11.951 "is_configured": true, 00:20:11.951 "data_offset": 2048, 00:20:11.951 "data_size": 63488 00:20:11.951 }, 00:20:11.951 { 00:20:11.951 "name": "BaseBdev2", 00:20:11.951 "uuid": "ca2c5958-0022-5453-9fd7-f0d5e91f8b8c", 00:20:11.951 "is_configured": true, 00:20:11.951 "data_offset": 2048, 00:20:11.951 "data_size": 63488 00:20:11.951 }, 00:20:11.951 { 00:20:11.951 "name": "BaseBdev3", 00:20:11.951 "uuid": "81595d90-97e3-5e97-9da4-19a7fc63e6eb", 00:20:11.951 "is_configured": true, 00:20:11.951 "data_offset": 2048, 00:20:11.951 "data_size": 63488 00:20:11.951 }, 00:20:11.951 { 00:20:11.951 "name": "BaseBdev4", 00:20:11.951 "uuid": "e4d12cdc-c702-59f9-a4cd-eb56a3166fb5", 00:20:11.951 "is_configured": true, 00:20:11.951 "data_offset": 2048, 00:20:11.951 "data_size": 63488 00:20:11.951 } 00:20:11.951 ] 00:20:11.951 }' 00:20:11.951 06:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:12.211 06:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:12.211 06:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:12.211 06:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:12.211 06:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:20:12.211 06:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:20:12.211 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:20:12.211 06:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:20:12.211 06:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:20:12.211 06:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=669 00:20:12.211 
06:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:12.211 06:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:12.211 06:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:12.211 06:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:12.211 06:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:12.211 06:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:12.211 06:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:12.211 06:29:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.211 06:29:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:12.211 06:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:12.211 06:29:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.211 06:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:12.211 "name": "raid_bdev1", 00:20:12.211 "uuid": "3adf285b-6cf7-43ef-99b7-0d0543ee237e", 00:20:12.211 "strip_size_kb": 64, 00:20:12.211 "state": "online", 00:20:12.211 "raid_level": "raid5f", 00:20:12.211 "superblock": true, 00:20:12.211 "num_base_bdevs": 4, 00:20:12.211 "num_base_bdevs_discovered": 4, 00:20:12.211 "num_base_bdevs_operational": 4, 00:20:12.211 "process": { 00:20:12.211 "type": "rebuild", 00:20:12.211 "target": "spare", 00:20:12.211 "progress": { 00:20:12.211 "blocks": 21120, 00:20:12.211 "percent": 11 00:20:12.211 } 00:20:12.211 }, 00:20:12.211 "base_bdevs_list": [ 00:20:12.211 { 00:20:12.211 "name": "spare", 00:20:12.211 "uuid": 
"2a1441f1-5e94-5c1d-abb2-5664b58c8eb7", 00:20:12.211 "is_configured": true, 00:20:12.211 "data_offset": 2048, 00:20:12.211 "data_size": 63488 00:20:12.211 }, 00:20:12.211 { 00:20:12.211 "name": "BaseBdev2", 00:20:12.211 "uuid": "ca2c5958-0022-5453-9fd7-f0d5e91f8b8c", 00:20:12.211 "is_configured": true, 00:20:12.211 "data_offset": 2048, 00:20:12.211 "data_size": 63488 00:20:12.211 }, 00:20:12.211 { 00:20:12.211 "name": "BaseBdev3", 00:20:12.211 "uuid": "81595d90-97e3-5e97-9da4-19a7fc63e6eb", 00:20:12.211 "is_configured": true, 00:20:12.211 "data_offset": 2048, 00:20:12.211 "data_size": 63488 00:20:12.211 }, 00:20:12.211 { 00:20:12.211 "name": "BaseBdev4", 00:20:12.211 "uuid": "e4d12cdc-c702-59f9-a4cd-eb56a3166fb5", 00:20:12.211 "is_configured": true, 00:20:12.211 "data_offset": 2048, 00:20:12.211 "data_size": 63488 00:20:12.211 } 00:20:12.211 ] 00:20:12.211 }' 00:20:12.211 06:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:12.211 06:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:12.211 06:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:12.211 06:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:12.211 06:29:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:13.217 06:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:13.217 06:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:13.217 06:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:13.217 06:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:13.217 06:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:20:13.217 06:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:13.217 06:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.217 06:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:13.217 06:29:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.217 06:29:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:13.217 06:29:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.476 06:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:13.476 "name": "raid_bdev1", 00:20:13.476 "uuid": "3adf285b-6cf7-43ef-99b7-0d0543ee237e", 00:20:13.476 "strip_size_kb": 64, 00:20:13.476 "state": "online", 00:20:13.476 "raid_level": "raid5f", 00:20:13.476 "superblock": true, 00:20:13.476 "num_base_bdevs": 4, 00:20:13.476 "num_base_bdevs_discovered": 4, 00:20:13.476 "num_base_bdevs_operational": 4, 00:20:13.476 "process": { 00:20:13.476 "type": "rebuild", 00:20:13.476 "target": "spare", 00:20:13.476 "progress": { 00:20:13.476 "blocks": 42240, 00:20:13.476 "percent": 22 00:20:13.476 } 00:20:13.476 }, 00:20:13.476 "base_bdevs_list": [ 00:20:13.476 { 00:20:13.476 "name": "spare", 00:20:13.476 "uuid": "2a1441f1-5e94-5c1d-abb2-5664b58c8eb7", 00:20:13.476 "is_configured": true, 00:20:13.476 "data_offset": 2048, 00:20:13.476 "data_size": 63488 00:20:13.476 }, 00:20:13.476 { 00:20:13.476 "name": "BaseBdev2", 00:20:13.476 "uuid": "ca2c5958-0022-5453-9fd7-f0d5e91f8b8c", 00:20:13.476 "is_configured": true, 00:20:13.476 "data_offset": 2048, 00:20:13.476 "data_size": 63488 00:20:13.476 }, 00:20:13.476 { 00:20:13.476 "name": "BaseBdev3", 00:20:13.476 "uuid": "81595d90-97e3-5e97-9da4-19a7fc63e6eb", 00:20:13.476 "is_configured": true, 00:20:13.476 
"data_offset": 2048, 00:20:13.476 "data_size": 63488 00:20:13.476 }, 00:20:13.476 { 00:20:13.476 "name": "BaseBdev4", 00:20:13.476 "uuid": "e4d12cdc-c702-59f9-a4cd-eb56a3166fb5", 00:20:13.476 "is_configured": true, 00:20:13.476 "data_offset": 2048, 00:20:13.476 "data_size": 63488 00:20:13.476 } 00:20:13.476 ] 00:20:13.476 }' 00:20:13.476 06:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:13.476 06:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:13.476 06:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:13.476 06:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:13.476 06:29:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:14.413 06:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:14.413 06:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:14.413 06:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:14.413 06:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:14.413 06:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:14.413 06:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:14.413 06:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:14.413 06:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:14.413 06:29:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.413 06:29:58 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:20:14.413 06:29:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.413 06:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:14.413 "name": "raid_bdev1", 00:20:14.413 "uuid": "3adf285b-6cf7-43ef-99b7-0d0543ee237e", 00:20:14.413 "strip_size_kb": 64, 00:20:14.413 "state": "online", 00:20:14.413 "raid_level": "raid5f", 00:20:14.413 "superblock": true, 00:20:14.413 "num_base_bdevs": 4, 00:20:14.413 "num_base_bdevs_discovered": 4, 00:20:14.413 "num_base_bdevs_operational": 4, 00:20:14.413 "process": { 00:20:14.413 "type": "rebuild", 00:20:14.413 "target": "spare", 00:20:14.413 "progress": { 00:20:14.413 "blocks": 65280, 00:20:14.413 "percent": 34 00:20:14.413 } 00:20:14.413 }, 00:20:14.413 "base_bdevs_list": [ 00:20:14.413 { 00:20:14.413 "name": "spare", 00:20:14.413 "uuid": "2a1441f1-5e94-5c1d-abb2-5664b58c8eb7", 00:20:14.413 "is_configured": true, 00:20:14.413 "data_offset": 2048, 00:20:14.413 "data_size": 63488 00:20:14.413 }, 00:20:14.413 { 00:20:14.413 "name": "BaseBdev2", 00:20:14.413 "uuid": "ca2c5958-0022-5453-9fd7-f0d5e91f8b8c", 00:20:14.413 "is_configured": true, 00:20:14.413 "data_offset": 2048, 00:20:14.413 "data_size": 63488 00:20:14.413 }, 00:20:14.413 { 00:20:14.413 "name": "BaseBdev3", 00:20:14.413 "uuid": "81595d90-97e3-5e97-9da4-19a7fc63e6eb", 00:20:14.413 "is_configured": true, 00:20:14.413 "data_offset": 2048, 00:20:14.413 "data_size": 63488 00:20:14.413 }, 00:20:14.413 { 00:20:14.413 "name": "BaseBdev4", 00:20:14.413 "uuid": "e4d12cdc-c702-59f9-a4cd-eb56a3166fb5", 00:20:14.413 "is_configured": true, 00:20:14.413 "data_offset": 2048, 00:20:14.413 "data_size": 63488 00:20:14.413 } 00:20:14.413 ] 00:20:14.413 }' 00:20:14.413 06:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:14.671 06:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:20:14.671 06:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:14.672 06:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:14.672 06:29:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:15.610 06:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:15.610 06:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:15.610 06:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:15.610 06:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:15.610 06:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:15.610 06:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:15.610 06:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:15.610 06:29:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.610 06:29:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:15.610 06:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:15.610 06:29:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.610 06:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:15.610 "name": "raid_bdev1", 00:20:15.610 "uuid": "3adf285b-6cf7-43ef-99b7-0d0543ee237e", 00:20:15.610 "strip_size_kb": 64, 00:20:15.610 "state": "online", 00:20:15.610 "raid_level": "raid5f", 00:20:15.610 "superblock": true, 00:20:15.610 "num_base_bdevs": 4, 00:20:15.610 "num_base_bdevs_discovered": 4, 
00:20:15.610 "num_base_bdevs_operational": 4, 00:20:15.610 "process": { 00:20:15.610 "type": "rebuild", 00:20:15.610 "target": "spare", 00:20:15.610 "progress": { 00:20:15.610 "blocks": 86400, 00:20:15.610 "percent": 45 00:20:15.610 } 00:20:15.610 }, 00:20:15.610 "base_bdevs_list": [ 00:20:15.610 { 00:20:15.610 "name": "spare", 00:20:15.610 "uuid": "2a1441f1-5e94-5c1d-abb2-5664b58c8eb7", 00:20:15.610 "is_configured": true, 00:20:15.610 "data_offset": 2048, 00:20:15.610 "data_size": 63488 00:20:15.610 }, 00:20:15.610 { 00:20:15.610 "name": "BaseBdev2", 00:20:15.610 "uuid": "ca2c5958-0022-5453-9fd7-f0d5e91f8b8c", 00:20:15.610 "is_configured": true, 00:20:15.610 "data_offset": 2048, 00:20:15.610 "data_size": 63488 00:20:15.610 }, 00:20:15.610 { 00:20:15.610 "name": "BaseBdev3", 00:20:15.610 "uuid": "81595d90-97e3-5e97-9da4-19a7fc63e6eb", 00:20:15.610 "is_configured": true, 00:20:15.610 "data_offset": 2048, 00:20:15.610 "data_size": 63488 00:20:15.610 }, 00:20:15.610 { 00:20:15.610 "name": "BaseBdev4", 00:20:15.610 "uuid": "e4d12cdc-c702-59f9-a4cd-eb56a3166fb5", 00:20:15.610 "is_configured": true, 00:20:15.610 "data_offset": 2048, 00:20:15.610 "data_size": 63488 00:20:15.610 } 00:20:15.610 ] 00:20:15.610 }' 00:20:15.610 06:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:15.610 06:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:15.610 06:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:15.870 06:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:15.870 06:29:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:16.808 06:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:16.808 06:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:20:16.808 06:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:16.808 06:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:16.808 06:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:16.808 06:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:16.808 06:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:16.808 06:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:16.808 06:30:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.808 06:30:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:16.808 06:30:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.808 06:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:16.808 "name": "raid_bdev1", 00:20:16.808 "uuid": "3adf285b-6cf7-43ef-99b7-0d0543ee237e", 00:20:16.808 "strip_size_kb": 64, 00:20:16.808 "state": "online", 00:20:16.808 "raid_level": "raid5f", 00:20:16.808 "superblock": true, 00:20:16.808 "num_base_bdevs": 4, 00:20:16.808 "num_base_bdevs_discovered": 4, 00:20:16.808 "num_base_bdevs_operational": 4, 00:20:16.808 "process": { 00:20:16.808 "type": "rebuild", 00:20:16.808 "target": "spare", 00:20:16.808 "progress": { 00:20:16.808 "blocks": 109440, 00:20:16.808 "percent": 57 00:20:16.808 } 00:20:16.808 }, 00:20:16.808 "base_bdevs_list": [ 00:20:16.808 { 00:20:16.808 "name": "spare", 00:20:16.808 "uuid": "2a1441f1-5e94-5c1d-abb2-5664b58c8eb7", 00:20:16.808 "is_configured": true, 00:20:16.808 "data_offset": 2048, 00:20:16.808 "data_size": 63488 00:20:16.808 }, 00:20:16.808 { 00:20:16.808 "name": "BaseBdev2", 
00:20:16.808 "uuid": "ca2c5958-0022-5453-9fd7-f0d5e91f8b8c", 00:20:16.808 "is_configured": true, 00:20:16.808 "data_offset": 2048, 00:20:16.808 "data_size": 63488 00:20:16.808 }, 00:20:16.808 { 00:20:16.808 "name": "BaseBdev3", 00:20:16.808 "uuid": "81595d90-97e3-5e97-9da4-19a7fc63e6eb", 00:20:16.808 "is_configured": true, 00:20:16.808 "data_offset": 2048, 00:20:16.808 "data_size": 63488 00:20:16.808 }, 00:20:16.808 { 00:20:16.808 "name": "BaseBdev4", 00:20:16.808 "uuid": "e4d12cdc-c702-59f9-a4cd-eb56a3166fb5", 00:20:16.808 "is_configured": true, 00:20:16.808 "data_offset": 2048, 00:20:16.808 "data_size": 63488 00:20:16.808 } 00:20:16.808 ] 00:20:16.808 }' 00:20:16.808 06:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:16.808 06:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:16.808 06:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:16.808 06:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:16.808 06:30:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:18.190 06:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:18.190 06:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:18.190 06:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:18.190 06:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:18.190 06:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:18.190 06:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:18.190 06:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:20:18.190 06:30:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.190 06:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:18.190 06:30:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.190 06:30:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.190 06:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:18.190 "name": "raid_bdev1", 00:20:18.190 "uuid": "3adf285b-6cf7-43ef-99b7-0d0543ee237e", 00:20:18.190 "strip_size_kb": 64, 00:20:18.190 "state": "online", 00:20:18.190 "raid_level": "raid5f", 00:20:18.190 "superblock": true, 00:20:18.190 "num_base_bdevs": 4, 00:20:18.190 "num_base_bdevs_discovered": 4, 00:20:18.190 "num_base_bdevs_operational": 4, 00:20:18.190 "process": { 00:20:18.190 "type": "rebuild", 00:20:18.190 "target": "spare", 00:20:18.190 "progress": { 00:20:18.190 "blocks": 130560, 00:20:18.190 "percent": 68 00:20:18.190 } 00:20:18.190 }, 00:20:18.190 "base_bdevs_list": [ 00:20:18.190 { 00:20:18.190 "name": "spare", 00:20:18.190 "uuid": "2a1441f1-5e94-5c1d-abb2-5664b58c8eb7", 00:20:18.190 "is_configured": true, 00:20:18.190 "data_offset": 2048, 00:20:18.190 "data_size": 63488 00:20:18.190 }, 00:20:18.190 { 00:20:18.190 "name": "BaseBdev2", 00:20:18.190 "uuid": "ca2c5958-0022-5453-9fd7-f0d5e91f8b8c", 00:20:18.190 "is_configured": true, 00:20:18.190 "data_offset": 2048, 00:20:18.190 "data_size": 63488 00:20:18.190 }, 00:20:18.190 { 00:20:18.190 "name": "BaseBdev3", 00:20:18.190 "uuid": "81595d90-97e3-5e97-9da4-19a7fc63e6eb", 00:20:18.190 "is_configured": true, 00:20:18.190 "data_offset": 2048, 00:20:18.190 "data_size": 63488 00:20:18.190 }, 00:20:18.190 { 00:20:18.190 "name": "BaseBdev4", 00:20:18.190 "uuid": "e4d12cdc-c702-59f9-a4cd-eb56a3166fb5", 00:20:18.190 "is_configured": true, 
00:20:18.190 "data_offset": 2048, 00:20:18.190 "data_size": 63488 00:20:18.190 } 00:20:18.190 ] 00:20:18.190 }' 00:20:18.190 06:30:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:18.190 06:30:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:18.190 06:30:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:18.190 06:30:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:18.190 06:30:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:19.129 06:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:19.129 06:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:19.129 06:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:19.129 06:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:19.129 06:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:19.129 06:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:19.129 06:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:19.129 06:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:19.129 06:30:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.129 06:30:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:19.129 06:30:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.129 06:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:20:19.129 "name": "raid_bdev1", 00:20:19.129 "uuid": "3adf285b-6cf7-43ef-99b7-0d0543ee237e", 00:20:19.129 "strip_size_kb": 64, 00:20:19.129 "state": "online", 00:20:19.129 "raid_level": "raid5f", 00:20:19.129 "superblock": true, 00:20:19.129 "num_base_bdevs": 4, 00:20:19.129 "num_base_bdevs_discovered": 4, 00:20:19.129 "num_base_bdevs_operational": 4, 00:20:19.129 "process": { 00:20:19.129 "type": "rebuild", 00:20:19.129 "target": "spare", 00:20:19.129 "progress": { 00:20:19.129 "blocks": 153600, 00:20:19.129 "percent": 80 00:20:19.129 } 00:20:19.129 }, 00:20:19.129 "base_bdevs_list": [ 00:20:19.129 { 00:20:19.129 "name": "spare", 00:20:19.129 "uuid": "2a1441f1-5e94-5c1d-abb2-5664b58c8eb7", 00:20:19.129 "is_configured": true, 00:20:19.129 "data_offset": 2048, 00:20:19.129 "data_size": 63488 00:20:19.129 }, 00:20:19.129 { 00:20:19.129 "name": "BaseBdev2", 00:20:19.129 "uuid": "ca2c5958-0022-5453-9fd7-f0d5e91f8b8c", 00:20:19.129 "is_configured": true, 00:20:19.129 "data_offset": 2048, 00:20:19.129 "data_size": 63488 00:20:19.129 }, 00:20:19.129 { 00:20:19.129 "name": "BaseBdev3", 00:20:19.129 "uuid": "81595d90-97e3-5e97-9da4-19a7fc63e6eb", 00:20:19.129 "is_configured": true, 00:20:19.129 "data_offset": 2048, 00:20:19.129 "data_size": 63488 00:20:19.129 }, 00:20:19.129 { 00:20:19.129 "name": "BaseBdev4", 00:20:19.129 "uuid": "e4d12cdc-c702-59f9-a4cd-eb56a3166fb5", 00:20:19.129 "is_configured": true, 00:20:19.129 "data_offset": 2048, 00:20:19.129 "data_size": 63488 00:20:19.129 } 00:20:19.129 ] 00:20:19.129 }' 00:20:19.129 06:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:19.129 06:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:19.129 06:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:19.129 06:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare 
== \s\p\a\r\e ]] 00:20:19.129 06:30:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:20.509 06:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:20.509 06:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:20.509 06:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:20.509 06:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:20.509 06:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:20.509 06:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:20.509 06:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:20.509 06:30:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.509 06:30:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:20.509 06:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:20.509 06:30:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.509 06:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:20.509 "name": "raid_bdev1", 00:20:20.509 "uuid": "3adf285b-6cf7-43ef-99b7-0d0543ee237e", 00:20:20.509 "strip_size_kb": 64, 00:20:20.509 "state": "online", 00:20:20.509 "raid_level": "raid5f", 00:20:20.509 "superblock": true, 00:20:20.509 "num_base_bdevs": 4, 00:20:20.509 "num_base_bdevs_discovered": 4, 00:20:20.509 "num_base_bdevs_operational": 4, 00:20:20.509 "process": { 00:20:20.509 "type": "rebuild", 00:20:20.509 "target": "spare", 00:20:20.509 "progress": { 00:20:20.509 "blocks": 174720, 00:20:20.509 "percent": 91 00:20:20.509 
} 00:20:20.509 }, 00:20:20.509 "base_bdevs_list": [ 00:20:20.509 { 00:20:20.509 "name": "spare", 00:20:20.509 "uuid": "2a1441f1-5e94-5c1d-abb2-5664b58c8eb7", 00:20:20.509 "is_configured": true, 00:20:20.510 "data_offset": 2048, 00:20:20.510 "data_size": 63488 00:20:20.510 }, 00:20:20.510 { 00:20:20.510 "name": "BaseBdev2", 00:20:20.510 "uuid": "ca2c5958-0022-5453-9fd7-f0d5e91f8b8c", 00:20:20.510 "is_configured": true, 00:20:20.510 "data_offset": 2048, 00:20:20.510 "data_size": 63488 00:20:20.510 }, 00:20:20.510 { 00:20:20.510 "name": "BaseBdev3", 00:20:20.510 "uuid": "81595d90-97e3-5e97-9da4-19a7fc63e6eb", 00:20:20.510 "is_configured": true, 00:20:20.510 "data_offset": 2048, 00:20:20.510 "data_size": 63488 00:20:20.510 }, 00:20:20.510 { 00:20:20.510 "name": "BaseBdev4", 00:20:20.510 "uuid": "e4d12cdc-c702-59f9-a4cd-eb56a3166fb5", 00:20:20.510 "is_configured": true, 00:20:20.510 "data_offset": 2048, 00:20:20.510 "data_size": 63488 00:20:20.510 } 00:20:20.510 ] 00:20:20.510 }' 00:20:20.510 06:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:20.510 06:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:20.510 06:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:20.510 06:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:20.510 06:30:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:21.079 [2024-11-26 06:30:05.092767] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:21.079 [2024-11-26 06:30:05.092867] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:21.079 [2024-11-26 06:30:05.093050] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:21.338 06:30:05 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:21.338 06:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:21.338 06:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:21.338 06:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:21.338 06:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:21.338 06:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:21.338 06:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:21.338 06:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:21.338 06:30:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.338 06:30:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.338 06:30:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.338 06:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:21.338 "name": "raid_bdev1", 00:20:21.338 "uuid": "3adf285b-6cf7-43ef-99b7-0d0543ee237e", 00:20:21.338 "strip_size_kb": 64, 00:20:21.338 "state": "online", 00:20:21.338 "raid_level": "raid5f", 00:20:21.338 "superblock": true, 00:20:21.338 "num_base_bdevs": 4, 00:20:21.338 "num_base_bdevs_discovered": 4, 00:20:21.338 "num_base_bdevs_operational": 4, 00:20:21.338 "base_bdevs_list": [ 00:20:21.338 { 00:20:21.338 "name": "spare", 00:20:21.338 "uuid": "2a1441f1-5e94-5c1d-abb2-5664b58c8eb7", 00:20:21.338 "is_configured": true, 00:20:21.338 "data_offset": 2048, 00:20:21.338 "data_size": 63488 00:20:21.338 }, 00:20:21.338 { 00:20:21.338 "name": "BaseBdev2", 00:20:21.338 "uuid": 
"ca2c5958-0022-5453-9fd7-f0d5e91f8b8c", 00:20:21.338 "is_configured": true, 00:20:21.338 "data_offset": 2048, 00:20:21.338 "data_size": 63488 00:20:21.338 }, 00:20:21.338 { 00:20:21.338 "name": "BaseBdev3", 00:20:21.338 "uuid": "81595d90-97e3-5e97-9da4-19a7fc63e6eb", 00:20:21.338 "is_configured": true, 00:20:21.338 "data_offset": 2048, 00:20:21.338 "data_size": 63488 00:20:21.338 }, 00:20:21.338 { 00:20:21.338 "name": "BaseBdev4", 00:20:21.338 "uuid": "e4d12cdc-c702-59f9-a4cd-eb56a3166fb5", 00:20:21.338 "is_configured": true, 00:20:21.339 "data_offset": 2048, 00:20:21.339 "data_size": 63488 00:20:21.339 } 00:20:21.339 ] 00:20:21.339 }' 00:20:21.339 06:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:21.339 06:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:21.339 06:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:21.598 06:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:21.598 06:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:20:21.598 06:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:21.598 06:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:21.598 06:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:21.598 06:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:21.598 06:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:21.598 06:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:21.598 06:30:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.598 
06:30:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.598 06:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:21.598 06:30:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.598 06:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:21.598 "name": "raid_bdev1", 00:20:21.598 "uuid": "3adf285b-6cf7-43ef-99b7-0d0543ee237e", 00:20:21.598 "strip_size_kb": 64, 00:20:21.598 "state": "online", 00:20:21.598 "raid_level": "raid5f", 00:20:21.598 "superblock": true, 00:20:21.598 "num_base_bdevs": 4, 00:20:21.598 "num_base_bdevs_discovered": 4, 00:20:21.598 "num_base_bdevs_operational": 4, 00:20:21.598 "base_bdevs_list": [ 00:20:21.598 { 00:20:21.598 "name": "spare", 00:20:21.598 "uuid": "2a1441f1-5e94-5c1d-abb2-5664b58c8eb7", 00:20:21.598 "is_configured": true, 00:20:21.598 "data_offset": 2048, 00:20:21.598 "data_size": 63488 00:20:21.598 }, 00:20:21.598 { 00:20:21.598 "name": "BaseBdev2", 00:20:21.598 "uuid": "ca2c5958-0022-5453-9fd7-f0d5e91f8b8c", 00:20:21.598 "is_configured": true, 00:20:21.598 "data_offset": 2048, 00:20:21.598 "data_size": 63488 00:20:21.598 }, 00:20:21.598 { 00:20:21.598 "name": "BaseBdev3", 00:20:21.598 "uuid": "81595d90-97e3-5e97-9da4-19a7fc63e6eb", 00:20:21.598 "is_configured": true, 00:20:21.598 "data_offset": 2048, 00:20:21.598 "data_size": 63488 00:20:21.598 }, 00:20:21.598 { 00:20:21.598 "name": "BaseBdev4", 00:20:21.598 "uuid": "e4d12cdc-c702-59f9-a4cd-eb56a3166fb5", 00:20:21.598 "is_configured": true, 00:20:21.598 "data_offset": 2048, 00:20:21.598 "data_size": 63488 00:20:21.598 } 00:20:21.598 ] 00:20:21.598 }' 00:20:21.598 06:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:21.598 06:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:21.598 06:30:05 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:21.598 06:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:21.598 06:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:20:21.598 06:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:21.598 06:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:21.598 06:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:21.598 06:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:21.598 06:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:21.598 06:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:21.598 06:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:21.598 06:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:21.598 06:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:21.598 06:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:21.598 06:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:21.598 06:30:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.598 06:30:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.598 06:30:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.598 06:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:20:21.598 "name": "raid_bdev1", 00:20:21.598 "uuid": "3adf285b-6cf7-43ef-99b7-0d0543ee237e", 00:20:21.598 "strip_size_kb": 64, 00:20:21.598 "state": "online", 00:20:21.598 "raid_level": "raid5f", 00:20:21.598 "superblock": true, 00:20:21.598 "num_base_bdevs": 4, 00:20:21.598 "num_base_bdevs_discovered": 4, 00:20:21.598 "num_base_bdevs_operational": 4, 00:20:21.598 "base_bdevs_list": [ 00:20:21.598 { 00:20:21.598 "name": "spare", 00:20:21.598 "uuid": "2a1441f1-5e94-5c1d-abb2-5664b58c8eb7", 00:20:21.598 "is_configured": true, 00:20:21.598 "data_offset": 2048, 00:20:21.598 "data_size": 63488 00:20:21.598 }, 00:20:21.598 { 00:20:21.599 "name": "BaseBdev2", 00:20:21.599 "uuid": "ca2c5958-0022-5453-9fd7-f0d5e91f8b8c", 00:20:21.599 "is_configured": true, 00:20:21.599 "data_offset": 2048, 00:20:21.599 "data_size": 63488 00:20:21.599 }, 00:20:21.599 { 00:20:21.599 "name": "BaseBdev3", 00:20:21.599 "uuid": "81595d90-97e3-5e97-9da4-19a7fc63e6eb", 00:20:21.599 "is_configured": true, 00:20:21.599 "data_offset": 2048, 00:20:21.599 "data_size": 63488 00:20:21.599 }, 00:20:21.599 { 00:20:21.599 "name": "BaseBdev4", 00:20:21.599 "uuid": "e4d12cdc-c702-59f9-a4cd-eb56a3166fb5", 00:20:21.599 "is_configured": true, 00:20:21.599 "data_offset": 2048, 00:20:21.599 "data_size": 63488 00:20:21.599 } 00:20:21.599 ] 00:20:21.599 }' 00:20:21.599 06:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:21.599 06:30:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:22.167 06:30:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:22.167 06:30:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.167 06:30:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:22.167 [2024-11-26 06:30:06.005958] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:22.167 [2024-11-26 
06:30:06.006064] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:22.167 [2024-11-26 06:30:06.006209] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:22.167 [2024-11-26 06:30:06.006357] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:22.167 [2024-11-26 06:30:06.006427] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:22.167 06:30:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.167 06:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:22.167 06:30:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.167 06:30:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:22.167 06:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:20:22.167 06:30:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.167 06:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:20:22.167 06:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:20:22.167 06:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:20:22.167 06:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:22.167 06:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:22.167 06:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:20:22.167 06:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:22.167 06:30:06 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:22.167 06:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:22.167 06:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:20:22.167 06:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:22.167 06:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:22.167 06:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:22.167 /dev/nbd0 00:20:22.426 06:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:22.426 06:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:22.426 06:30:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:22.426 06:30:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:20:22.426 06:30:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:22.426 06:30:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:22.426 06:30:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:22.426 06:30:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:20:22.426 06:30:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:22.426 06:30:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:22.426 06:30:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:22.426 1+0 records in 00:20:22.426 1+0 
records out 00:20:22.426 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000508678 s, 8.1 MB/s 00:20:22.426 06:30:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:22.426 06:30:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:20:22.426 06:30:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:22.426 06:30:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:22.426 06:30:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:20:22.426 06:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:22.426 06:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:22.426 06:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:20:22.426 /dev/nbd1 00:20:22.685 06:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:22.685 06:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:22.685 06:30:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:20:22.685 06:30:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:20:22.685 06:30:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:22.685 06:30:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:22.685 06:30:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:20:22.685 06:30:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:20:22.685 06:30:06 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:22.685 06:30:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:22.685 06:30:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:22.685 1+0 records in 00:20:22.685 1+0 records out 00:20:22.685 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000326777 s, 12.5 MB/s 00:20:22.685 06:30:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:22.685 06:30:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:20:22.685 06:30:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:22.685 06:30:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:22.685 06:30:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:20:22.685 06:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:22.685 06:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:22.685 06:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:20:22.685 06:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:20:22.685 06:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:22.685 06:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:22.685 06:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:22.685 06:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 
-- # local i 00:20:22.685 06:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:22.685 06:30:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:22.944 06:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:22.944 06:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:22.944 06:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:22.944 06:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:22.944 06:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:22.944 06:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:22.944 06:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:20:22.944 06:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:20:22.944 06:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:22.945 06:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:20:23.204 06:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:23.204 06:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:23.204 06:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:23.204 06:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:23.204 06:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:23.204 06:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd1 /proc/partitions 00:20:23.204 06:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:20:23.204 06:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:20:23.204 06:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:20:23.204 06:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:20:23.204 06:30:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.204 06:30:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:23.204 06:30:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.204 06:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:23.204 06:30:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.204 06:30:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:23.204 [2024-11-26 06:30:07.278747] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:23.204 [2024-11-26 06:30:07.278830] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:23.204 [2024-11-26 06:30:07.278862] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:20:23.204 [2024-11-26 06:30:07.278872] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:23.204 [2024-11-26 06:30:07.281759] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:23.204 [2024-11-26 06:30:07.281800] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:23.204 [2024-11-26 06:30:07.281927] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:23.204 [2024-11-26 06:30:07.281984] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:23.204 [2024-11-26 06:30:07.282159] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:23.204 [2024-11-26 06:30:07.282260] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:23.204 [2024-11-26 06:30:07.282407] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:23.204 spare 00:20:23.204 06:30:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.204 06:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:20:23.204 06:30:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.204 06:30:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:23.463 [2024-11-26 06:30:07.382341] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:20:23.463 [2024-11-26 06:30:07.382401] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:23.463 [2024-11-26 06:30:07.382792] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:20:23.463 [2024-11-26 06:30:07.391013] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:20:23.464 [2024-11-26 06:30:07.391043] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:20:23.464 [2024-11-26 06:30:07.391335] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:23.464 06:30:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.464 06:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:20:23.464 06:30:07 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:23.464 06:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:23.464 06:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:23.464 06:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:23.464 06:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:23.464 06:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:23.464 06:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:23.464 06:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:23.464 06:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:23.464 06:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:23.464 06:30:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.464 06:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:23.464 06:30:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:23.464 06:30:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.464 06:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:23.464 "name": "raid_bdev1", 00:20:23.464 "uuid": "3adf285b-6cf7-43ef-99b7-0d0543ee237e", 00:20:23.464 "strip_size_kb": 64, 00:20:23.464 "state": "online", 00:20:23.464 "raid_level": "raid5f", 00:20:23.464 "superblock": true, 00:20:23.464 "num_base_bdevs": 4, 00:20:23.464 "num_base_bdevs_discovered": 4, 00:20:23.464 "num_base_bdevs_operational": 4, 00:20:23.464 "base_bdevs_list": [ 00:20:23.464 { 
00:20:23.464 "name": "spare", 00:20:23.464 "uuid": "2a1441f1-5e94-5c1d-abb2-5664b58c8eb7", 00:20:23.464 "is_configured": true, 00:20:23.464 "data_offset": 2048, 00:20:23.464 "data_size": 63488 00:20:23.464 }, 00:20:23.464 { 00:20:23.464 "name": "BaseBdev2", 00:20:23.464 "uuid": "ca2c5958-0022-5453-9fd7-f0d5e91f8b8c", 00:20:23.464 "is_configured": true, 00:20:23.464 "data_offset": 2048, 00:20:23.464 "data_size": 63488 00:20:23.464 }, 00:20:23.464 { 00:20:23.464 "name": "BaseBdev3", 00:20:23.464 "uuid": "81595d90-97e3-5e97-9da4-19a7fc63e6eb", 00:20:23.464 "is_configured": true, 00:20:23.464 "data_offset": 2048, 00:20:23.464 "data_size": 63488 00:20:23.464 }, 00:20:23.464 { 00:20:23.464 "name": "BaseBdev4", 00:20:23.464 "uuid": "e4d12cdc-c702-59f9-a4cd-eb56a3166fb5", 00:20:23.464 "is_configured": true, 00:20:23.464 "data_offset": 2048, 00:20:23.464 "data_size": 63488 00:20:23.464 } 00:20:23.464 ] 00:20:23.464 }' 00:20:23.464 06:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:23.464 06:30:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:24.033 06:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:24.033 06:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:24.033 06:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:24.033 06:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:24.033 06:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:24.033 06:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:24.033 06:30:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.033 06:30:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:20:24.033 06:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:24.033 06:30:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.033 06:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:24.033 "name": "raid_bdev1", 00:20:24.033 "uuid": "3adf285b-6cf7-43ef-99b7-0d0543ee237e", 00:20:24.033 "strip_size_kb": 64, 00:20:24.033 "state": "online", 00:20:24.033 "raid_level": "raid5f", 00:20:24.033 "superblock": true, 00:20:24.033 "num_base_bdevs": 4, 00:20:24.033 "num_base_bdevs_discovered": 4, 00:20:24.033 "num_base_bdevs_operational": 4, 00:20:24.033 "base_bdevs_list": [ 00:20:24.033 { 00:20:24.033 "name": "spare", 00:20:24.033 "uuid": "2a1441f1-5e94-5c1d-abb2-5664b58c8eb7", 00:20:24.033 "is_configured": true, 00:20:24.033 "data_offset": 2048, 00:20:24.033 "data_size": 63488 00:20:24.033 }, 00:20:24.033 { 00:20:24.033 "name": "BaseBdev2", 00:20:24.033 "uuid": "ca2c5958-0022-5453-9fd7-f0d5e91f8b8c", 00:20:24.033 "is_configured": true, 00:20:24.033 "data_offset": 2048, 00:20:24.033 "data_size": 63488 00:20:24.033 }, 00:20:24.033 { 00:20:24.033 "name": "BaseBdev3", 00:20:24.033 "uuid": "81595d90-97e3-5e97-9da4-19a7fc63e6eb", 00:20:24.033 "is_configured": true, 00:20:24.033 "data_offset": 2048, 00:20:24.033 "data_size": 63488 00:20:24.033 }, 00:20:24.033 { 00:20:24.033 "name": "BaseBdev4", 00:20:24.033 "uuid": "e4d12cdc-c702-59f9-a4cd-eb56a3166fb5", 00:20:24.033 "is_configured": true, 00:20:24.033 "data_offset": 2048, 00:20:24.033 "data_size": 63488 00:20:24.033 } 00:20:24.033 ] 00:20:24.033 }' 00:20:24.033 06:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:24.033 06:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:24.033 06:30:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:20:24.033 06:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:24.033 06:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:24.033 06:30:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.033 06:30:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:24.033 06:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:20:24.033 06:30:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.033 06:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:20:24.033 06:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:24.033 06:30:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.033 06:30:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:24.033 [2024-11-26 06:30:08.076681] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:24.033 06:30:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.033 06:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:24.033 06:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:24.033 06:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:24.033 06:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:24.033 06:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:24.033 06:30:08 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:24.033 06:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:24.033 06:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:24.033 06:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:24.033 06:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:24.033 06:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:24.033 06:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:24.033 06:30:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.033 06:30:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:24.033 06:30:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.033 06:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:24.033 "name": "raid_bdev1", 00:20:24.033 "uuid": "3adf285b-6cf7-43ef-99b7-0d0543ee237e", 00:20:24.033 "strip_size_kb": 64, 00:20:24.033 "state": "online", 00:20:24.033 "raid_level": "raid5f", 00:20:24.033 "superblock": true, 00:20:24.033 "num_base_bdevs": 4, 00:20:24.033 "num_base_bdevs_discovered": 3, 00:20:24.033 "num_base_bdevs_operational": 3, 00:20:24.033 "base_bdevs_list": [ 00:20:24.033 { 00:20:24.033 "name": null, 00:20:24.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:24.033 "is_configured": false, 00:20:24.033 "data_offset": 0, 00:20:24.033 "data_size": 63488 00:20:24.033 }, 00:20:24.033 { 00:20:24.033 "name": "BaseBdev2", 00:20:24.033 "uuid": "ca2c5958-0022-5453-9fd7-f0d5e91f8b8c", 00:20:24.033 "is_configured": true, 00:20:24.033 "data_offset": 2048, 00:20:24.033 "data_size": 63488 00:20:24.033 }, 00:20:24.033 
{ 00:20:24.033 "name": "BaseBdev3", 00:20:24.033 "uuid": "81595d90-97e3-5e97-9da4-19a7fc63e6eb", 00:20:24.033 "is_configured": true, 00:20:24.033 "data_offset": 2048, 00:20:24.033 "data_size": 63488 00:20:24.033 }, 00:20:24.033 { 00:20:24.033 "name": "BaseBdev4", 00:20:24.033 "uuid": "e4d12cdc-c702-59f9-a4cd-eb56a3166fb5", 00:20:24.033 "is_configured": true, 00:20:24.033 "data_offset": 2048, 00:20:24.033 "data_size": 63488 00:20:24.033 } 00:20:24.033 ] 00:20:24.033 }' 00:20:24.034 06:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:24.034 06:30:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:24.602 06:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:24.602 06:30:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.602 06:30:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:24.602 [2024-11-26 06:30:08.564580] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:24.602 [2024-11-26 06:30:08.564906] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:24.602 [2024-11-26 06:30:08.564987] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:20:24.602 [2024-11-26 06:30:08.565100] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:24.602 [2024-11-26 06:30:08.580660] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:20:24.602 06:30:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.602 06:30:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:20:24.602 [2024-11-26 06:30:08.591724] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:25.541 06:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:25.541 06:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:25.541 06:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:25.541 06:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:25.541 06:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:25.541 06:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:25.541 06:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:25.541 06:30:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.541 06:30:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.541 06:30:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.541 06:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:25.541 "name": "raid_bdev1", 00:20:25.541 "uuid": "3adf285b-6cf7-43ef-99b7-0d0543ee237e", 00:20:25.541 "strip_size_kb": 64, 00:20:25.541 "state": "online", 00:20:25.541 
"raid_level": "raid5f", 00:20:25.541 "superblock": true, 00:20:25.541 "num_base_bdevs": 4, 00:20:25.541 "num_base_bdevs_discovered": 4, 00:20:25.541 "num_base_bdevs_operational": 4, 00:20:25.541 "process": { 00:20:25.541 "type": "rebuild", 00:20:25.541 "target": "spare", 00:20:25.541 "progress": { 00:20:25.541 "blocks": 19200, 00:20:25.541 "percent": 10 00:20:25.541 } 00:20:25.541 }, 00:20:25.541 "base_bdevs_list": [ 00:20:25.541 { 00:20:25.541 "name": "spare", 00:20:25.541 "uuid": "2a1441f1-5e94-5c1d-abb2-5664b58c8eb7", 00:20:25.541 "is_configured": true, 00:20:25.541 "data_offset": 2048, 00:20:25.541 "data_size": 63488 00:20:25.541 }, 00:20:25.541 { 00:20:25.541 "name": "BaseBdev2", 00:20:25.541 "uuid": "ca2c5958-0022-5453-9fd7-f0d5e91f8b8c", 00:20:25.541 "is_configured": true, 00:20:25.541 "data_offset": 2048, 00:20:25.541 "data_size": 63488 00:20:25.541 }, 00:20:25.541 { 00:20:25.541 "name": "BaseBdev3", 00:20:25.541 "uuid": "81595d90-97e3-5e97-9da4-19a7fc63e6eb", 00:20:25.541 "is_configured": true, 00:20:25.541 "data_offset": 2048, 00:20:25.541 "data_size": 63488 00:20:25.541 }, 00:20:25.541 { 00:20:25.541 "name": "BaseBdev4", 00:20:25.541 "uuid": "e4d12cdc-c702-59f9-a4cd-eb56a3166fb5", 00:20:25.541 "is_configured": true, 00:20:25.541 "data_offset": 2048, 00:20:25.541 "data_size": 63488 00:20:25.541 } 00:20:25.541 ] 00:20:25.541 }' 00:20:25.541 06:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:25.801 06:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:25.801 06:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:25.801 06:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:25.801 06:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:20:25.801 06:30:09 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.801 06:30:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.801 [2024-11-26 06:30:09.747031] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:25.801 [2024-11-26 06:30:09.803140] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:25.801 [2024-11-26 06:30:09.803299] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:25.801 [2024-11-26 06:30:09.803323] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:25.801 [2024-11-26 06:30:09.803336] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:25.801 06:30:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.801 06:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:25.801 06:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:25.801 06:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:25.801 06:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:25.801 06:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:25.801 06:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:25.801 06:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:25.801 06:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:25.801 06:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:25.801 06:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:20:25.801 06:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:25.801 06:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:25.801 06:30:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.801 06:30:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.801 06:30:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.801 06:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:25.801 "name": "raid_bdev1", 00:20:25.801 "uuid": "3adf285b-6cf7-43ef-99b7-0d0543ee237e", 00:20:25.801 "strip_size_kb": 64, 00:20:25.801 "state": "online", 00:20:25.801 "raid_level": "raid5f", 00:20:25.801 "superblock": true, 00:20:25.801 "num_base_bdevs": 4, 00:20:25.801 "num_base_bdevs_discovered": 3, 00:20:25.801 "num_base_bdevs_operational": 3, 00:20:25.801 "base_bdevs_list": [ 00:20:25.801 { 00:20:25.801 "name": null, 00:20:25.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:25.801 "is_configured": false, 00:20:25.801 "data_offset": 0, 00:20:25.801 "data_size": 63488 00:20:25.801 }, 00:20:25.801 { 00:20:25.801 "name": "BaseBdev2", 00:20:25.801 "uuid": "ca2c5958-0022-5453-9fd7-f0d5e91f8b8c", 00:20:25.801 "is_configured": true, 00:20:25.801 "data_offset": 2048, 00:20:25.801 "data_size": 63488 00:20:25.801 }, 00:20:25.801 { 00:20:25.801 "name": "BaseBdev3", 00:20:25.801 "uuid": "81595d90-97e3-5e97-9da4-19a7fc63e6eb", 00:20:25.801 "is_configured": true, 00:20:25.801 "data_offset": 2048, 00:20:25.801 "data_size": 63488 00:20:25.801 }, 00:20:25.801 { 00:20:25.801 "name": "BaseBdev4", 00:20:25.801 "uuid": "e4d12cdc-c702-59f9-a4cd-eb56a3166fb5", 00:20:25.801 "is_configured": true, 00:20:25.801 "data_offset": 2048, 00:20:25.801 "data_size": 63488 00:20:25.801 } 00:20:25.801 ] 00:20:25.801 
}' 00:20:25.801 06:30:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:25.801 06:30:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:26.370 06:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:26.370 06:30:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.370 06:30:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:26.370 [2024-11-26 06:30:10.332353] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:26.370 [2024-11-26 06:30:10.332450] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:26.370 [2024-11-26 06:30:10.332489] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:20:26.370 [2024-11-26 06:30:10.332504] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:26.370 [2024-11-26 06:30:10.333166] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:26.370 [2024-11-26 06:30:10.333241] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:26.370 [2024-11-26 06:30:10.333407] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:26.370 [2024-11-26 06:30:10.333431] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:26.370 [2024-11-26 06:30:10.333444] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:20:26.370 [2024-11-26 06:30:10.333486] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:26.370 spare 00:20:26.370 [2024-11-26 06:30:10.350345] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:20:26.370 06:30:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.370 06:30:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:20:26.370 [2024-11-26 06:30:10.361110] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:27.309 06:30:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:27.309 06:30:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:27.309 06:30:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:27.309 06:30:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:27.309 06:30:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:27.309 06:30:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:27.309 06:30:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:27.309 06:30:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.309 06:30:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.309 06:30:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.309 06:30:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:27.309 "name": "raid_bdev1", 00:20:27.309 "uuid": "3adf285b-6cf7-43ef-99b7-0d0543ee237e", 00:20:27.309 "strip_size_kb": 64, 00:20:27.309 "state": 
"online", 00:20:27.309 "raid_level": "raid5f", 00:20:27.309 "superblock": true, 00:20:27.309 "num_base_bdevs": 4, 00:20:27.309 "num_base_bdevs_discovered": 4, 00:20:27.309 "num_base_bdevs_operational": 4, 00:20:27.309 "process": { 00:20:27.309 "type": "rebuild", 00:20:27.309 "target": "spare", 00:20:27.309 "progress": { 00:20:27.309 "blocks": 17280, 00:20:27.309 "percent": 9 00:20:27.309 } 00:20:27.309 }, 00:20:27.309 "base_bdevs_list": [ 00:20:27.309 { 00:20:27.309 "name": "spare", 00:20:27.309 "uuid": "2a1441f1-5e94-5c1d-abb2-5664b58c8eb7", 00:20:27.309 "is_configured": true, 00:20:27.309 "data_offset": 2048, 00:20:27.309 "data_size": 63488 00:20:27.309 }, 00:20:27.309 { 00:20:27.309 "name": "BaseBdev2", 00:20:27.309 "uuid": "ca2c5958-0022-5453-9fd7-f0d5e91f8b8c", 00:20:27.309 "is_configured": true, 00:20:27.309 "data_offset": 2048, 00:20:27.309 "data_size": 63488 00:20:27.309 }, 00:20:27.309 { 00:20:27.309 "name": "BaseBdev3", 00:20:27.309 "uuid": "81595d90-97e3-5e97-9da4-19a7fc63e6eb", 00:20:27.309 "is_configured": true, 00:20:27.309 "data_offset": 2048, 00:20:27.309 "data_size": 63488 00:20:27.309 }, 00:20:27.309 { 00:20:27.309 "name": "BaseBdev4", 00:20:27.309 "uuid": "e4d12cdc-c702-59f9-a4cd-eb56a3166fb5", 00:20:27.309 "is_configured": true, 00:20:27.309 "data_offset": 2048, 00:20:27.309 "data_size": 63488 00:20:27.309 } 00:20:27.309 ] 00:20:27.309 }' 00:20:27.309 06:30:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:27.568 06:30:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:27.568 06:30:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:27.568 06:30:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:27.568 06:30:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:20:27.568 06:30:11 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.568 06:30:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.568 [2024-11-26 06:30:11.508901] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:27.568 [2024-11-26 06:30:11.572930] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:27.568 [2024-11-26 06:30:11.573012] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:27.568 [2024-11-26 06:30:11.573036] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:27.568 [2024-11-26 06:30:11.573045] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:27.568 06:30:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.568 06:30:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:27.569 06:30:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:27.569 06:30:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:27.569 06:30:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:27.569 06:30:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:27.569 06:30:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:27.569 06:30:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:27.569 06:30:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:27.569 06:30:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:27.569 06:30:11 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:27.569 06:30:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:27.569 06:30:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.569 06:30:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.569 06:30:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:27.569 06:30:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.569 06:30:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:27.569 "name": "raid_bdev1", 00:20:27.569 "uuid": "3adf285b-6cf7-43ef-99b7-0d0543ee237e", 00:20:27.569 "strip_size_kb": 64, 00:20:27.569 "state": "online", 00:20:27.569 "raid_level": "raid5f", 00:20:27.569 "superblock": true, 00:20:27.569 "num_base_bdevs": 4, 00:20:27.569 "num_base_bdevs_discovered": 3, 00:20:27.569 "num_base_bdevs_operational": 3, 00:20:27.569 "base_bdevs_list": [ 00:20:27.569 { 00:20:27.569 "name": null, 00:20:27.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:27.569 "is_configured": false, 00:20:27.569 "data_offset": 0, 00:20:27.569 "data_size": 63488 00:20:27.569 }, 00:20:27.569 { 00:20:27.569 "name": "BaseBdev2", 00:20:27.569 "uuid": "ca2c5958-0022-5453-9fd7-f0d5e91f8b8c", 00:20:27.569 "is_configured": true, 00:20:27.569 "data_offset": 2048, 00:20:27.569 "data_size": 63488 00:20:27.569 }, 00:20:27.569 { 00:20:27.569 "name": "BaseBdev3", 00:20:27.569 "uuid": "81595d90-97e3-5e97-9da4-19a7fc63e6eb", 00:20:27.569 "is_configured": true, 00:20:27.569 "data_offset": 2048, 00:20:27.569 "data_size": 63488 00:20:27.569 }, 00:20:27.569 { 00:20:27.569 "name": "BaseBdev4", 00:20:27.569 "uuid": "e4d12cdc-c702-59f9-a4cd-eb56a3166fb5", 00:20:27.569 "is_configured": true, 00:20:27.569 "data_offset": 2048, 00:20:27.569 
"data_size": 63488 00:20:27.569 } 00:20:27.569 ] 00:20:27.569 }' 00:20:27.569 06:30:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:27.569 06:30:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:28.143 06:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:28.143 06:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:28.143 06:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:28.143 06:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:28.143 06:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:28.143 06:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:28.144 06:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:28.144 06:30:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.144 06:30:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:28.144 06:30:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.144 06:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:28.144 "name": "raid_bdev1", 00:20:28.144 "uuid": "3adf285b-6cf7-43ef-99b7-0d0543ee237e", 00:20:28.144 "strip_size_kb": 64, 00:20:28.144 "state": "online", 00:20:28.144 "raid_level": "raid5f", 00:20:28.144 "superblock": true, 00:20:28.144 "num_base_bdevs": 4, 00:20:28.144 "num_base_bdevs_discovered": 3, 00:20:28.144 "num_base_bdevs_operational": 3, 00:20:28.144 "base_bdevs_list": [ 00:20:28.144 { 00:20:28.144 "name": null, 00:20:28.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:28.144 
"is_configured": false, 00:20:28.144 "data_offset": 0, 00:20:28.144 "data_size": 63488 00:20:28.144 }, 00:20:28.144 { 00:20:28.144 "name": "BaseBdev2", 00:20:28.144 "uuid": "ca2c5958-0022-5453-9fd7-f0d5e91f8b8c", 00:20:28.144 "is_configured": true, 00:20:28.144 "data_offset": 2048, 00:20:28.144 "data_size": 63488 00:20:28.144 }, 00:20:28.144 { 00:20:28.144 "name": "BaseBdev3", 00:20:28.144 "uuid": "81595d90-97e3-5e97-9da4-19a7fc63e6eb", 00:20:28.144 "is_configured": true, 00:20:28.144 "data_offset": 2048, 00:20:28.144 "data_size": 63488 00:20:28.144 }, 00:20:28.144 { 00:20:28.144 "name": "BaseBdev4", 00:20:28.144 "uuid": "e4d12cdc-c702-59f9-a4cd-eb56a3166fb5", 00:20:28.144 "is_configured": true, 00:20:28.144 "data_offset": 2048, 00:20:28.144 "data_size": 63488 00:20:28.144 } 00:20:28.144 ] 00:20:28.144 }' 00:20:28.144 06:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:28.144 06:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:28.144 06:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:28.144 06:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:28.144 06:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:20:28.144 06:30:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.144 06:30:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:28.144 06:30:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.144 06:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:28.144 06:30:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.144 06:30:12 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:28.144 [2024-11-26 06:30:12.174911] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:28.144 [2024-11-26 06:30:12.175060] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:28.144 [2024-11-26 06:30:12.175098] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:20:28.144 [2024-11-26 06:30:12.175111] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:28.144 [2024-11-26 06:30:12.175773] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:28.144 [2024-11-26 06:30:12.175797] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:28.144 [2024-11-26 06:30:12.175912] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:28.144 [2024-11-26 06:30:12.175932] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:28.144 [2024-11-26 06:30:12.175946] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:28.144 [2024-11-26 06:30:12.175960] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:20:28.144 BaseBdev1 00:20:28.144 06:30:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.144 06:30:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:20:29.092 06:30:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:29.092 06:30:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:29.093 06:30:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:20:29.093 06:30:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:29.093 06:30:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:29.093 06:30:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:29.093 06:30:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:29.093 06:30:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:29.093 06:30:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:29.093 06:30:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:29.093 06:30:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:29.093 06:30:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.093 06:30:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.093 06:30:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:29.093 06:30:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.352 06:30:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:29.352 "name": "raid_bdev1", 00:20:29.352 "uuid": "3adf285b-6cf7-43ef-99b7-0d0543ee237e", 00:20:29.352 "strip_size_kb": 64, 00:20:29.352 "state": "online", 00:20:29.352 "raid_level": "raid5f", 00:20:29.352 "superblock": true, 00:20:29.352 "num_base_bdevs": 4, 00:20:29.352 "num_base_bdevs_discovered": 3, 00:20:29.352 "num_base_bdevs_operational": 3, 00:20:29.352 "base_bdevs_list": [ 00:20:29.352 { 00:20:29.352 "name": null, 00:20:29.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.352 "is_configured": false, 00:20:29.352 
"data_offset": 0, 00:20:29.352 "data_size": 63488 00:20:29.352 }, 00:20:29.352 { 00:20:29.352 "name": "BaseBdev2", 00:20:29.352 "uuid": "ca2c5958-0022-5453-9fd7-f0d5e91f8b8c", 00:20:29.352 "is_configured": true, 00:20:29.352 "data_offset": 2048, 00:20:29.352 "data_size": 63488 00:20:29.352 }, 00:20:29.352 { 00:20:29.352 "name": "BaseBdev3", 00:20:29.352 "uuid": "81595d90-97e3-5e97-9da4-19a7fc63e6eb", 00:20:29.352 "is_configured": true, 00:20:29.352 "data_offset": 2048, 00:20:29.352 "data_size": 63488 00:20:29.352 }, 00:20:29.352 { 00:20:29.352 "name": "BaseBdev4", 00:20:29.352 "uuid": "e4d12cdc-c702-59f9-a4cd-eb56a3166fb5", 00:20:29.352 "is_configured": true, 00:20:29.352 "data_offset": 2048, 00:20:29.352 "data_size": 63488 00:20:29.352 } 00:20:29.352 ] 00:20:29.352 }' 00:20:29.352 06:30:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:29.352 06:30:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.612 06:30:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:29.612 06:30:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:29.612 06:30:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:29.612 06:30:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:29.612 06:30:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:29.612 06:30:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:29.612 06:30:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.612 06:30:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.612 06:30:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:20:29.612 06:30:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.612 06:30:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:29.612 "name": "raid_bdev1", 00:20:29.612 "uuid": "3adf285b-6cf7-43ef-99b7-0d0543ee237e", 00:20:29.612 "strip_size_kb": 64, 00:20:29.612 "state": "online", 00:20:29.612 "raid_level": "raid5f", 00:20:29.612 "superblock": true, 00:20:29.612 "num_base_bdevs": 4, 00:20:29.612 "num_base_bdevs_discovered": 3, 00:20:29.612 "num_base_bdevs_operational": 3, 00:20:29.612 "base_bdevs_list": [ 00:20:29.612 { 00:20:29.612 "name": null, 00:20:29.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.612 "is_configured": false, 00:20:29.612 "data_offset": 0, 00:20:29.612 "data_size": 63488 00:20:29.612 }, 00:20:29.612 { 00:20:29.612 "name": "BaseBdev2", 00:20:29.612 "uuid": "ca2c5958-0022-5453-9fd7-f0d5e91f8b8c", 00:20:29.612 "is_configured": true, 00:20:29.612 "data_offset": 2048, 00:20:29.612 "data_size": 63488 00:20:29.612 }, 00:20:29.612 { 00:20:29.612 "name": "BaseBdev3", 00:20:29.612 "uuid": "81595d90-97e3-5e97-9da4-19a7fc63e6eb", 00:20:29.612 "is_configured": true, 00:20:29.612 "data_offset": 2048, 00:20:29.612 "data_size": 63488 00:20:29.612 }, 00:20:29.612 { 00:20:29.613 "name": "BaseBdev4", 00:20:29.613 "uuid": "e4d12cdc-c702-59f9-a4cd-eb56a3166fb5", 00:20:29.613 "is_configured": true, 00:20:29.613 "data_offset": 2048, 00:20:29.613 "data_size": 63488 00:20:29.613 } 00:20:29.613 ] 00:20:29.613 }' 00:20:29.613 06:30:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:29.613 06:30:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:29.613 06:30:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:29.872 06:30:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:29.872 
06:30:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:29.872 06:30:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:20:29.872 06:30:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:29.872 06:30:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:29.872 06:30:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:29.872 06:30:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:29.872 06:30:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:29.872 06:30:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:29.872 06:30:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.872 06:30:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.872 [2024-11-26 06:30:13.796327] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:29.872 [2024-11-26 06:30:13.796589] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:29.872 [2024-11-26 06:30:13.796617] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:29.872 request: 00:20:29.872 { 00:20:29.872 "base_bdev": "BaseBdev1", 00:20:29.872 "raid_bdev": "raid_bdev1", 00:20:29.872 "method": "bdev_raid_add_base_bdev", 00:20:29.872 "req_id": 1 00:20:29.872 } 00:20:29.872 Got JSON-RPC error response 00:20:29.872 response: 00:20:29.872 { 00:20:29.872 "code": -22, 00:20:29.872 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:20:29.872 } 00:20:29.872 06:30:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:29.872 06:30:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:20:29.872 06:30:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:29.872 06:30:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:29.872 06:30:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:29.872 06:30:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:20:30.811 06:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:30.812 06:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:30.812 06:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:30.812 06:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:30.812 06:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:30.812 06:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:30.812 06:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:30.812 06:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:30.812 06:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:30.812 06:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:30.812 06:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:30.812 06:30:14 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.812 06:30:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:30.812 06:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:30.812 06:30:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.812 06:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:30.812 "name": "raid_bdev1", 00:20:30.812 "uuid": "3adf285b-6cf7-43ef-99b7-0d0543ee237e", 00:20:30.812 "strip_size_kb": 64, 00:20:30.812 "state": "online", 00:20:30.812 "raid_level": "raid5f", 00:20:30.812 "superblock": true, 00:20:30.812 "num_base_bdevs": 4, 00:20:30.812 "num_base_bdevs_discovered": 3, 00:20:30.812 "num_base_bdevs_operational": 3, 00:20:30.812 "base_bdevs_list": [ 00:20:30.812 { 00:20:30.812 "name": null, 00:20:30.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:30.812 "is_configured": false, 00:20:30.812 "data_offset": 0, 00:20:30.812 "data_size": 63488 00:20:30.812 }, 00:20:30.812 { 00:20:30.812 "name": "BaseBdev2", 00:20:30.812 "uuid": "ca2c5958-0022-5453-9fd7-f0d5e91f8b8c", 00:20:30.812 "is_configured": true, 00:20:30.812 "data_offset": 2048, 00:20:30.812 "data_size": 63488 00:20:30.812 }, 00:20:30.812 { 00:20:30.812 "name": "BaseBdev3", 00:20:30.812 "uuid": "81595d90-97e3-5e97-9da4-19a7fc63e6eb", 00:20:30.812 "is_configured": true, 00:20:30.812 "data_offset": 2048, 00:20:30.812 "data_size": 63488 00:20:30.812 }, 00:20:30.812 { 00:20:30.812 "name": "BaseBdev4", 00:20:30.812 "uuid": "e4d12cdc-c702-59f9-a4cd-eb56a3166fb5", 00:20:30.812 "is_configured": true, 00:20:30.812 "data_offset": 2048, 00:20:30.812 "data_size": 63488 00:20:30.812 } 00:20:30.812 ] 00:20:30.812 }' 00:20:30.812 06:30:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:30.812 06:30:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:20:31.071 06:30:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:31.071 06:30:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:31.071 06:30:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:31.071 06:30:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:31.071 06:30:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:31.330 06:30:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:31.330 06:30:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.330 06:30:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:31.330 06:30:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:31.330 06:30:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.330 06:30:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:31.330 "name": "raid_bdev1", 00:20:31.330 "uuid": "3adf285b-6cf7-43ef-99b7-0d0543ee237e", 00:20:31.330 "strip_size_kb": 64, 00:20:31.330 "state": "online", 00:20:31.330 "raid_level": "raid5f", 00:20:31.330 "superblock": true, 00:20:31.330 "num_base_bdevs": 4, 00:20:31.330 "num_base_bdevs_discovered": 3, 00:20:31.330 "num_base_bdevs_operational": 3, 00:20:31.330 "base_bdevs_list": [ 00:20:31.330 { 00:20:31.330 "name": null, 00:20:31.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:31.330 "is_configured": false, 00:20:31.330 "data_offset": 0, 00:20:31.330 "data_size": 63488 00:20:31.330 }, 00:20:31.330 { 00:20:31.330 "name": "BaseBdev2", 00:20:31.330 "uuid": "ca2c5958-0022-5453-9fd7-f0d5e91f8b8c", 00:20:31.330 "is_configured": true, 
00:20:31.330 "data_offset": 2048, 00:20:31.330 "data_size": 63488 00:20:31.330 }, 00:20:31.330 { 00:20:31.330 "name": "BaseBdev3", 00:20:31.330 "uuid": "81595d90-97e3-5e97-9da4-19a7fc63e6eb", 00:20:31.330 "is_configured": true, 00:20:31.330 "data_offset": 2048, 00:20:31.330 "data_size": 63488 00:20:31.330 }, 00:20:31.330 { 00:20:31.330 "name": "BaseBdev4", 00:20:31.330 "uuid": "e4d12cdc-c702-59f9-a4cd-eb56a3166fb5", 00:20:31.330 "is_configured": true, 00:20:31.330 "data_offset": 2048, 00:20:31.330 "data_size": 63488 00:20:31.330 } 00:20:31.330 ] 00:20:31.330 }' 00:20:31.330 06:30:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:31.330 06:30:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:31.330 06:30:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:31.330 06:30:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:31.330 06:30:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 85676 00:20:31.330 06:30:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 85676 ']' 00:20:31.330 06:30:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 85676 00:20:31.330 06:30:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:20:31.330 06:30:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:31.330 06:30:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85676 00:20:31.331 06:30:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:31.331 06:30:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:31.331 06:30:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 
-- # echo 'killing process with pid 85676' 00:20:31.331 killing process with pid 85676 00:20:31.331 06:30:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 85676 00:20:31.331 Received shutdown signal, test time was about 60.000000 seconds 00:20:31.331 00:20:31.331 Latency(us) 00:20:31.331 [2024-11-26T06:30:15.468Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:31.331 [2024-11-26T06:30:15.468Z] =================================================================================================================== 00:20:31.331 [2024-11-26T06:30:15.468Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:31.331 [2024-11-26 06:30:15.377765] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:31.331 06:30:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 85676 00:20:31.331 [2024-11-26 06:30:15.377924] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:31.331 [2024-11-26 06:30:15.378019] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:31.331 [2024-11-26 06:30:15.378063] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:20:31.901 [2024-11-26 06:30:15.924368] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:33.380 ************************************ 00:20:33.380 END TEST raid5f_rebuild_test_sb 00:20:33.380 ************************************ 00:20:33.380 06:30:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:20:33.380 00:20:33.380 real 0m27.470s 00:20:33.380 user 0m34.224s 00:20:33.380 sys 0m3.212s 00:20:33.380 06:30:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:33.380 06:30:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:33.380 06:30:17 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:20:33.380 06:30:17 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:20:33.380 06:30:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:33.380 06:30:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:33.380 06:30:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:33.380 ************************************ 00:20:33.380 START TEST raid_state_function_test_sb_4k 00:20:33.380 ************************************ 00:20:33.380 06:30:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:20:33.380 06:30:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:20:33.380 06:30:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:20:33.380 06:30:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:20:33.380 06:30:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:20:33.380 06:30:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:20:33.380 06:30:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:33.380 06:30:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:20:33.380 06:30:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:33.380 06:30:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:33.380 06:30:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:20:33.380 06:30:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:33.380 06:30:17 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:33.380 06:30:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:33.380 06:30:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:20:33.380 06:30:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:20:33.380 06:30:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:20:33.380 06:30:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:20:33.380 06:30:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:20:33.380 06:30:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:20:33.380 06:30:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:20:33.380 06:30:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:20:33.380 06:30:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:20:33.380 06:30:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=86492 00:20:33.380 06:30:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:33.380 06:30:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86492' 00:20:33.380 Process raid pid: 86492 00:20:33.380 06:30:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 86492 00:20:33.380 06:30:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86492 ']' 00:20:33.380 06:30:17 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:33.380 06:30:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:33.380 06:30:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:33.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:33.380 06:30:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:33.380 06:30:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:33.380 [2024-11-26 06:30:17.338773] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:20:33.380 [2024-11-26 06:30:17.339009] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:33.641 [2024-11-26 06:30:17.518176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:33.641 [2024-11-26 06:30:17.658368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:33.901 [2024-11-26 06:30:17.909944] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:33.901 [2024-11-26 06:30:17.910041] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:34.162 06:30:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:34.162 06:30:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:20:34.162 06:30:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 00:20:34.162 06:30:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.162 06:30:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:34.162 [2024-11-26 06:30:18.171016] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:34.162 [2024-11-26 06:30:18.171082] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:34.162 [2024-11-26 06:30:18.171093] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:34.162 [2024-11-26 06:30:18.171104] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:34.162 06:30:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.162 06:30:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:34.162 06:30:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:34.162 06:30:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:34.162 06:30:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:34.162 06:30:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:34.162 06:30:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:34.162 06:30:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:34.162 06:30:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:34.162 06:30:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:34.162 
06:30:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:34.162 06:30:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:34.162 06:30:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:34.162 06:30:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.162 06:30:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:34.162 06:30:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.162 06:30:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:34.162 "name": "Existed_Raid", 00:20:34.162 "uuid": "d975711c-794c-4510-be9a-d4fa23cd0189", 00:20:34.162 "strip_size_kb": 0, 00:20:34.162 "state": "configuring", 00:20:34.162 "raid_level": "raid1", 00:20:34.162 "superblock": true, 00:20:34.162 "num_base_bdevs": 2, 00:20:34.162 "num_base_bdevs_discovered": 0, 00:20:34.162 "num_base_bdevs_operational": 2, 00:20:34.162 "base_bdevs_list": [ 00:20:34.162 { 00:20:34.162 "name": "BaseBdev1", 00:20:34.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:34.162 "is_configured": false, 00:20:34.162 "data_offset": 0, 00:20:34.162 "data_size": 0 00:20:34.162 }, 00:20:34.162 { 00:20:34.162 "name": "BaseBdev2", 00:20:34.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:34.162 "is_configured": false, 00:20:34.162 "data_offset": 0, 00:20:34.162 "data_size": 0 00:20:34.162 } 00:20:34.162 ] 00:20:34.162 }' 00:20:34.162 06:30:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:34.162 06:30:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:34.758 06:30:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:20:34.758 06:30:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.758 06:30:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:34.758 [2024-11-26 06:30:18.610246] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:34.758 [2024-11-26 06:30:18.610346] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:20:34.758 06:30:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.758 06:30:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:34.758 06:30:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.758 06:30:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:34.758 [2024-11-26 06:30:18.622197] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:34.758 [2024-11-26 06:30:18.622280] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:34.758 [2024-11-26 06:30:18.622310] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:34.758 [2024-11-26 06:30:18.622337] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:34.758 06:30:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.758 06:30:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:20:34.758 06:30:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.758 06:30:18 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:34.758 [2024-11-26 06:30:18.675970] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:34.758 BaseBdev1 00:20:34.758 06:30:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.758 06:30:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:20:34.758 06:30:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:20:34.758 06:30:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:34.758 06:30:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:20:34.758 06:30:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:34.758 06:30:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:34.758 06:30:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:34.758 06:30:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.758 06:30:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:34.758 06:30:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.758 06:30:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:34.758 06:30:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.758 06:30:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:34.758 [ 00:20:34.758 { 00:20:34.758 "name": "BaseBdev1", 00:20:34.758 "aliases": [ 00:20:34.758 
"204ea2c9-514b-4dc2-9f90-d43830494144" 00:20:34.758 ], 00:20:34.758 "product_name": "Malloc disk", 00:20:34.758 "block_size": 4096, 00:20:34.758 "num_blocks": 8192, 00:20:34.758 "uuid": "204ea2c9-514b-4dc2-9f90-d43830494144", 00:20:34.758 "assigned_rate_limits": { 00:20:34.758 "rw_ios_per_sec": 0, 00:20:34.758 "rw_mbytes_per_sec": 0, 00:20:34.758 "r_mbytes_per_sec": 0, 00:20:34.758 "w_mbytes_per_sec": 0 00:20:34.758 }, 00:20:34.758 "claimed": true, 00:20:34.758 "claim_type": "exclusive_write", 00:20:34.758 "zoned": false, 00:20:34.758 "supported_io_types": { 00:20:34.758 "read": true, 00:20:34.758 "write": true, 00:20:34.758 "unmap": true, 00:20:34.758 "flush": true, 00:20:34.758 "reset": true, 00:20:34.758 "nvme_admin": false, 00:20:34.758 "nvme_io": false, 00:20:34.758 "nvme_io_md": false, 00:20:34.758 "write_zeroes": true, 00:20:34.758 "zcopy": true, 00:20:34.758 "get_zone_info": false, 00:20:34.758 "zone_management": false, 00:20:34.758 "zone_append": false, 00:20:34.758 "compare": false, 00:20:34.758 "compare_and_write": false, 00:20:34.758 "abort": true, 00:20:34.758 "seek_hole": false, 00:20:34.758 "seek_data": false, 00:20:34.758 "copy": true, 00:20:34.758 "nvme_iov_md": false 00:20:34.758 }, 00:20:34.758 "memory_domains": [ 00:20:34.758 { 00:20:34.758 "dma_device_id": "system", 00:20:34.758 "dma_device_type": 1 00:20:34.758 }, 00:20:34.758 { 00:20:34.758 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:34.758 "dma_device_type": 2 00:20:34.758 } 00:20:34.758 ], 00:20:34.758 "driver_specific": {} 00:20:34.758 } 00:20:34.758 ] 00:20:34.758 06:30:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.758 06:30:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:20:34.758 06:30:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:34.758 06:30:18 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:34.758 06:30:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:34.758 06:30:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:34.758 06:30:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:34.758 06:30:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:34.758 06:30:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:34.758 06:30:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:34.758 06:30:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:34.758 06:30:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:34.758 06:30:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:34.758 06:30:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:34.758 06:30:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.758 06:30:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:34.758 06:30:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.758 06:30:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:34.758 "name": "Existed_Raid", 00:20:34.758 "uuid": "93b12323-c7ff-424e-8d9a-154b003b2ab5", 00:20:34.758 "strip_size_kb": 0, 00:20:34.758 "state": "configuring", 00:20:34.758 "raid_level": "raid1", 00:20:34.758 "superblock": true, 00:20:34.758 "num_base_bdevs": 2, 00:20:34.758 
"num_base_bdevs_discovered": 1, 00:20:34.758 "num_base_bdevs_operational": 2, 00:20:34.758 "base_bdevs_list": [ 00:20:34.758 { 00:20:34.758 "name": "BaseBdev1", 00:20:34.758 "uuid": "204ea2c9-514b-4dc2-9f90-d43830494144", 00:20:34.758 "is_configured": true, 00:20:34.758 "data_offset": 256, 00:20:34.758 "data_size": 7936 00:20:34.758 }, 00:20:34.758 { 00:20:34.758 "name": "BaseBdev2", 00:20:34.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:34.758 "is_configured": false, 00:20:34.758 "data_offset": 0, 00:20:34.759 "data_size": 0 00:20:34.759 } 00:20:34.759 ] 00:20:34.759 }' 00:20:34.759 06:30:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:34.759 06:30:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:35.329 06:30:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:35.330 06:30:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.330 06:30:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:35.330 [2024-11-26 06:30:19.187150] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:35.330 [2024-11-26 06:30:19.187214] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:20:35.330 06:30:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.330 06:30:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:35.330 06:30:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.330 06:30:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:35.330 [2024-11-26 06:30:19.195177] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:35.330 [2024-11-26 06:30:19.197350] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:35.330 [2024-11-26 06:30:19.197390] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:35.330 06:30:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.330 06:30:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:20:35.330 06:30:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:35.330 06:30:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:35.330 06:30:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:35.330 06:30:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:35.330 06:30:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:35.330 06:30:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:35.330 06:30:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:35.330 06:30:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:35.330 06:30:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:35.330 06:30:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:35.330 06:30:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:35.330 06:30:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:20:35.330 06:30:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.330 06:30:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:35.330 06:30:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:35.330 06:30:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.330 06:30:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:35.330 "name": "Existed_Raid", 00:20:35.330 "uuid": "9f204366-ab15-4fe5-9945-008b4161551f", 00:20:35.330 "strip_size_kb": 0, 00:20:35.330 "state": "configuring", 00:20:35.330 "raid_level": "raid1", 00:20:35.330 "superblock": true, 00:20:35.330 "num_base_bdevs": 2, 00:20:35.330 "num_base_bdevs_discovered": 1, 00:20:35.330 "num_base_bdevs_operational": 2, 00:20:35.330 "base_bdevs_list": [ 00:20:35.330 { 00:20:35.330 "name": "BaseBdev1", 00:20:35.330 "uuid": "204ea2c9-514b-4dc2-9f90-d43830494144", 00:20:35.330 "is_configured": true, 00:20:35.330 "data_offset": 256, 00:20:35.330 "data_size": 7936 00:20:35.330 }, 00:20:35.330 { 00:20:35.330 "name": "BaseBdev2", 00:20:35.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:35.330 "is_configured": false, 00:20:35.330 "data_offset": 0, 00:20:35.330 "data_size": 0 00:20:35.330 } 00:20:35.330 ] 00:20:35.330 }' 00:20:35.330 06:30:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:35.330 06:30:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:35.590 06:30:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:20:35.590 06:30:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.590 06:30:19 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:35.590 [2024-11-26 06:30:19.662998] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:35.590 [2024-11-26 06:30:19.663471] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:35.590 [2024-11-26 06:30:19.663536] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:35.590 [2024-11-26 06:30:19.663901] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:35.590 BaseBdev2 00:20:35.590 [2024-11-26 06:30:19.664138] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:35.590 [2024-11-26 06:30:19.664155] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:20:35.590 [2024-11-26 06:30:19.664328] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:35.590 06:30:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.590 06:30:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:20:35.590 06:30:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:20:35.590 06:30:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:35.590 06:30:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:20:35.590 06:30:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:35.590 06:30:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:35.590 06:30:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:35.590 06:30:19 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.590 06:30:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:35.590 06:30:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.590 06:30:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:35.590 06:30:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.590 06:30:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:35.590 [ 00:20:35.590 { 00:20:35.590 "name": "BaseBdev2", 00:20:35.590 "aliases": [ 00:20:35.590 "7c1ec998-30e7-4b51-84a2-f9acb1703ce5" 00:20:35.590 ], 00:20:35.590 "product_name": "Malloc disk", 00:20:35.590 "block_size": 4096, 00:20:35.590 "num_blocks": 8192, 00:20:35.590 "uuid": "7c1ec998-30e7-4b51-84a2-f9acb1703ce5", 00:20:35.590 "assigned_rate_limits": { 00:20:35.590 "rw_ios_per_sec": 0, 00:20:35.590 "rw_mbytes_per_sec": 0, 00:20:35.590 "r_mbytes_per_sec": 0, 00:20:35.590 "w_mbytes_per_sec": 0 00:20:35.590 }, 00:20:35.590 "claimed": true, 00:20:35.590 "claim_type": "exclusive_write", 00:20:35.590 "zoned": false, 00:20:35.590 "supported_io_types": { 00:20:35.590 "read": true, 00:20:35.590 "write": true, 00:20:35.590 "unmap": true, 00:20:35.590 "flush": true, 00:20:35.590 "reset": true, 00:20:35.590 "nvme_admin": false, 00:20:35.590 "nvme_io": false, 00:20:35.590 "nvme_io_md": false, 00:20:35.590 "write_zeroes": true, 00:20:35.590 "zcopy": true, 00:20:35.590 "get_zone_info": false, 00:20:35.590 "zone_management": false, 00:20:35.590 "zone_append": false, 00:20:35.590 "compare": false, 00:20:35.590 "compare_and_write": false, 00:20:35.590 "abort": true, 00:20:35.590 "seek_hole": false, 00:20:35.590 "seek_data": false, 00:20:35.590 "copy": true, 00:20:35.590 "nvme_iov_md": false 
00:20:35.590 }, 00:20:35.590 "memory_domains": [ 00:20:35.590 { 00:20:35.590 "dma_device_id": "system", 00:20:35.590 "dma_device_type": 1 00:20:35.590 }, 00:20:35.590 { 00:20:35.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:35.591 "dma_device_type": 2 00:20:35.591 } 00:20:35.591 ], 00:20:35.591 "driver_specific": {} 00:20:35.591 } 00:20:35.591 ] 00:20:35.591 06:30:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.591 06:30:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:20:35.591 06:30:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:35.591 06:30:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:35.591 06:30:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:20:35.591 06:30:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:35.591 06:30:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:35.591 06:30:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:35.591 06:30:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:35.591 06:30:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:35.591 06:30:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:35.591 06:30:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:35.591 06:30:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:35.591 06:30:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:20:35.591 06:30:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:35.591 06:30:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:35.591 06:30:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.591 06:30:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:35.851 06:30:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.851 06:30:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:35.851 "name": "Existed_Raid", 00:20:35.851 "uuid": "9f204366-ab15-4fe5-9945-008b4161551f", 00:20:35.851 "strip_size_kb": 0, 00:20:35.851 "state": "online", 00:20:35.851 "raid_level": "raid1", 00:20:35.851 "superblock": true, 00:20:35.851 "num_base_bdevs": 2, 00:20:35.851 "num_base_bdevs_discovered": 2, 00:20:35.851 "num_base_bdevs_operational": 2, 00:20:35.851 "base_bdevs_list": [ 00:20:35.851 { 00:20:35.851 "name": "BaseBdev1", 00:20:35.851 "uuid": "204ea2c9-514b-4dc2-9f90-d43830494144", 00:20:35.851 "is_configured": true, 00:20:35.851 "data_offset": 256, 00:20:35.851 "data_size": 7936 00:20:35.851 }, 00:20:35.851 { 00:20:35.851 "name": "BaseBdev2", 00:20:35.851 "uuid": "7c1ec998-30e7-4b51-84a2-f9acb1703ce5", 00:20:35.851 "is_configured": true, 00:20:35.851 "data_offset": 256, 00:20:35.851 "data_size": 7936 00:20:35.851 } 00:20:35.851 ] 00:20:35.851 }' 00:20:35.851 06:30:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:35.851 06:30:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:36.111 06:30:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:20:36.111 06:30:20 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:36.111 06:30:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:36.111 06:30:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:36.111 06:30:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:20:36.111 06:30:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:36.111 06:30:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:36.111 06:30:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:36.111 06:30:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.111 06:30:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:36.111 [2024-11-26 06:30:20.138524] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:36.111 06:30:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.111 06:30:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:36.111 "name": "Existed_Raid", 00:20:36.111 "aliases": [ 00:20:36.111 "9f204366-ab15-4fe5-9945-008b4161551f" 00:20:36.111 ], 00:20:36.111 "product_name": "Raid Volume", 00:20:36.111 "block_size": 4096, 00:20:36.111 "num_blocks": 7936, 00:20:36.111 "uuid": "9f204366-ab15-4fe5-9945-008b4161551f", 00:20:36.111 "assigned_rate_limits": { 00:20:36.111 "rw_ios_per_sec": 0, 00:20:36.111 "rw_mbytes_per_sec": 0, 00:20:36.111 "r_mbytes_per_sec": 0, 00:20:36.111 "w_mbytes_per_sec": 0 00:20:36.111 }, 00:20:36.111 "claimed": false, 00:20:36.111 "zoned": false, 00:20:36.111 "supported_io_types": { 00:20:36.111 "read": true, 
00:20:36.111 "write": true, 00:20:36.111 "unmap": false, 00:20:36.111 "flush": false, 00:20:36.111 "reset": true, 00:20:36.111 "nvme_admin": false, 00:20:36.111 "nvme_io": false, 00:20:36.111 "nvme_io_md": false, 00:20:36.111 "write_zeroes": true, 00:20:36.111 "zcopy": false, 00:20:36.111 "get_zone_info": false, 00:20:36.111 "zone_management": false, 00:20:36.111 "zone_append": false, 00:20:36.111 "compare": false, 00:20:36.111 "compare_and_write": false, 00:20:36.111 "abort": false, 00:20:36.111 "seek_hole": false, 00:20:36.111 "seek_data": false, 00:20:36.111 "copy": false, 00:20:36.111 "nvme_iov_md": false 00:20:36.111 }, 00:20:36.111 "memory_domains": [ 00:20:36.111 { 00:20:36.111 "dma_device_id": "system", 00:20:36.111 "dma_device_type": 1 00:20:36.111 }, 00:20:36.111 { 00:20:36.111 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:36.111 "dma_device_type": 2 00:20:36.111 }, 00:20:36.111 { 00:20:36.111 "dma_device_id": "system", 00:20:36.111 "dma_device_type": 1 00:20:36.111 }, 00:20:36.111 { 00:20:36.111 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:36.111 "dma_device_type": 2 00:20:36.111 } 00:20:36.111 ], 00:20:36.111 "driver_specific": { 00:20:36.111 "raid": { 00:20:36.111 "uuid": "9f204366-ab15-4fe5-9945-008b4161551f", 00:20:36.111 "strip_size_kb": 0, 00:20:36.111 "state": "online", 00:20:36.111 "raid_level": "raid1", 00:20:36.111 "superblock": true, 00:20:36.111 "num_base_bdevs": 2, 00:20:36.111 "num_base_bdevs_discovered": 2, 00:20:36.111 "num_base_bdevs_operational": 2, 00:20:36.111 "base_bdevs_list": [ 00:20:36.111 { 00:20:36.111 "name": "BaseBdev1", 00:20:36.111 "uuid": "204ea2c9-514b-4dc2-9f90-d43830494144", 00:20:36.111 "is_configured": true, 00:20:36.111 "data_offset": 256, 00:20:36.111 "data_size": 7936 00:20:36.111 }, 00:20:36.111 { 00:20:36.111 "name": "BaseBdev2", 00:20:36.111 "uuid": "7c1ec998-30e7-4b51-84a2-f9acb1703ce5", 00:20:36.111 "is_configured": true, 00:20:36.111 "data_offset": 256, 00:20:36.111 "data_size": 7936 00:20:36.111 } 
00:20:36.111 ] 00:20:36.111 } 00:20:36.111 } 00:20:36.111 }' 00:20:36.111 06:30:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:36.111 06:30:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:20:36.111 BaseBdev2' 00:20:36.111 06:30:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:36.370 06:30:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:20:36.370 06:30:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:36.370 06:30:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:20:36.370 06:30:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:36.370 06:30:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.370 06:30:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:36.370 06:30:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.370 06:30:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:20:36.370 06:30:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:20:36.370 06:30:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:36.370 06:30:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:36.370 06:30:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- 
# jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:36.370 06:30:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.370 06:30:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:36.370 06:30:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.370 06:30:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:20:36.370 06:30:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:20:36.370 06:30:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:36.370 06:30:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.370 06:30:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:36.370 [2024-11-26 06:30:20.353927] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:36.370 06:30:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.370 06:30:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:20:36.370 06:30:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:20:36.370 06:30:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:36.370 06:30:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:20:36.370 06:30:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:20:36.370 06:30:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:20:36.370 06:30:20 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:36.370 06:30:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:36.370 06:30:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:36.370 06:30:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:36.370 06:30:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:36.370 06:30:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:36.370 06:30:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:36.370 06:30:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:36.370 06:30:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:36.370 06:30:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:36.370 06:30:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.370 06:30:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:36.370 06:30:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:36.370 06:30:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.630 06:30:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:36.630 "name": "Existed_Raid", 00:20:36.630 "uuid": "9f204366-ab15-4fe5-9945-008b4161551f", 00:20:36.630 "strip_size_kb": 0, 00:20:36.630 "state": "online", 00:20:36.630 "raid_level": "raid1", 00:20:36.630 "superblock": true, 00:20:36.630 "num_base_bdevs": 2, 00:20:36.630 
"num_base_bdevs_discovered": 1, 00:20:36.630 "num_base_bdevs_operational": 1, 00:20:36.630 "base_bdevs_list": [ 00:20:36.630 { 00:20:36.630 "name": null, 00:20:36.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:36.630 "is_configured": false, 00:20:36.630 "data_offset": 0, 00:20:36.630 "data_size": 7936 00:20:36.630 }, 00:20:36.630 { 00:20:36.630 "name": "BaseBdev2", 00:20:36.630 "uuid": "7c1ec998-30e7-4b51-84a2-f9acb1703ce5", 00:20:36.630 "is_configured": true, 00:20:36.630 "data_offset": 256, 00:20:36.630 "data_size": 7936 00:20:36.630 } 00:20:36.630 ] 00:20:36.630 }' 00:20:36.630 06:30:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:36.630 06:30:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:36.891 06:30:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:20:36.891 06:30:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:36.891 06:30:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:36.891 06:30:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.891 06:30:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:36.891 06:30:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:36.891 06:30:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.891 06:30:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:36.891 06:30:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:36.891 06:30:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:20:36.891 06:30:20 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.891 06:30:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:36.891 [2024-11-26 06:30:20.986861] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:36.891 [2024-11-26 06:30:20.987000] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:37.151 [2024-11-26 06:30:21.093491] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:37.151 [2024-11-26 06:30:21.093558] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:37.151 [2024-11-26 06:30:21.093584] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:20:37.151 06:30:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.151 06:30:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:37.151 06:30:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:37.151 06:30:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:37.151 06:30:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:20:37.151 06:30:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.151 06:30:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:37.151 06:30:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.151 06:30:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:20:37.151 06:30:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' 
']' 00:20:37.151 06:30:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:20:37.151 06:30:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 86492 00:20:37.151 06:30:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86492 ']' 00:20:37.151 06:30:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86492 00:20:37.151 06:30:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:20:37.151 06:30:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:37.151 06:30:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86492 00:20:37.151 06:30:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:37.151 06:30:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:37.151 06:30:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86492' 00:20:37.151 killing process with pid 86492 00:20:37.151 06:30:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86492 00:20:37.151 [2024-11-26 06:30:21.191981] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:37.151 06:30:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86492 00:20:37.151 [2024-11-26 06:30:21.210374] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:38.533 ************************************ 00:20:38.533 END TEST raid_state_function_test_sb_4k 00:20:38.533 ************************************ 00:20:38.533 06:30:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:20:38.533 00:20:38.533 real 0m5.198s 00:20:38.533 user 
0m7.339s 00:20:38.533 sys 0m0.972s 00:20:38.533 06:30:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:38.533 06:30:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:38.533 06:30:22 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:20:38.533 06:30:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:20:38.533 06:30:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:38.533 06:30:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:38.533 ************************************ 00:20:38.533 START TEST raid_superblock_test_4k 00:20:38.533 ************************************ 00:20:38.533 06:30:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:20:38.533 06:30:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:20:38.533 06:30:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:20:38.533 06:30:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:20:38.533 06:30:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:20:38.533 06:30:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:20:38.533 06:30:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:20:38.533 06:30:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:20:38.533 06:30:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:20:38.533 06:30:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:20:38.533 06:30:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 
00:20:38.533 06:30:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:20:38.533 06:30:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:20:38.533 06:30:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:20:38.533 06:30:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:20:38.533 06:30:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:20:38.533 06:30:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:20:38.533 06:30:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86743 00:20:38.533 06:30:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 86743 00:20:38.533 06:30:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 86743 ']' 00:20:38.533 06:30:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:38.533 06:30:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:38.533 06:30:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:38.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:38.533 06:30:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:38.533 06:30:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:38.533 [2024-11-26 06:30:22.620311] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:20:38.533 [2024-11-26 06:30:22.620960] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86743 ]
00:20:38.793 [2024-11-26 06:30:22.799538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:39.133 [2024-11-26 06:30:22.938188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:20:39.133 [2024-11-26 06:30:23.173422] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:20:39.133 [2024-11-26 06:30:23.173495] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:20:39.393 06:30:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:39.393 06:30:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0
00:20:39.393 06:30:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:20:39.393 06:30:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:20:39.393 06:30:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:20:39.393 06:30:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:20:39.393 06:30:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:20:39.393 06:30:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:20:39.393 06:30:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:20:39.393 06:30:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:20:39.393 06:30:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc1
00:20:39.393 06:30:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:39.393 06:30:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:20:39.393 malloc1
00:20:39.393 06:30:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:39.393 06:30:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:20:39.393 06:30:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:39.393 06:30:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:20:39.393 [2024-11-26 06:30:23.501923] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:20:39.393 [2024-11-26 06:30:23.501998] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:39.393 [2024-11-26 06:30:23.502024] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:20:39.393 [2024-11-26 06:30:23.502035] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:39.393 [2024-11-26 06:30:23.504526] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:39.393 [2024-11-26 06:30:23.504564] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:20:39.393 pt1
00:20:39.393 06:30:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:39.393 06:30:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:20:39.393 06:30:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:20:39.393 06:30:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:20:39.393 06:30:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:20:39.393 06:30:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:20:39.393 06:30:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:20:39.393 06:30:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:20:39.393 06:30:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:20:39.393 06:30:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2
00:20:39.393 06:30:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:39.393 06:30:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:20:39.653 malloc2
00:20:39.653 06:30:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:39.654 06:30:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:20:39.654 06:30:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:39.654 06:30:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:20:39.654 [2024-11-26 06:30:23.567601] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:20:39.654 [2024-11-26 06:30:23.567662] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:39.654 [2024-11-26 06:30:23.567686] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:20:39.654 [2024-11-26 06:30:23.567695] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:39.654 [2024-11-26 06:30:23.570129] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:39.654 [2024-11-26 06:30:23.570162] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:20:39.654 pt2
00:20:39.654 06:30:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:39.654 06:30:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:20:39.654 06:30:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:20:39.654 06:30:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s
00:20:39.654 06:30:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:39.654 06:30:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:20:39.654 [2024-11-26 06:30:23.579646] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:20:39.654 [2024-11-26 06:30:23.581824] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:20:39.654 [2024-11-26 06:30:23.582000] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:20:39.654 [2024-11-26 06:30:23.582017] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:20:39.654 [2024-11-26 06:30:23.582300] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:20:39.654 [2024-11-26 06:30:23.582492] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:20:39.654 [2024-11-26 06:30:23.582516] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:20:39.654 [2024-11-26 06:30:23.582670] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:20:39.654 06:30:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:39.654 06:30:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:20:39.654 06:30:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:20:39.654 06:30:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:20:39.654 06:30:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:20:39.654 06:30:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:20:39.654 06:30:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:20:39.654 06:30:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:20:39.654 06:30:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:20:39.654 06:30:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:20:39.654 06:30:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:20:39.654 06:30:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:39.654 06:30:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:39.654 06:30:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:39.654 06:30:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:20:39.654 06:30:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:39.654 06:30:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:20:39.654 "name": "raid_bdev1",
00:20:39.654 "uuid": "162ae4a7-8f3f-4862-bfd2-841168cf600e",
00:20:39.654 "strip_size_kb": 0,
00:20:39.654 "state": "online",
00:20:39.654 "raid_level": "raid1",
00:20:39.654 "superblock": true,
00:20:39.654 "num_base_bdevs": 2,
00:20:39.654 "num_base_bdevs_discovered": 2,
00:20:39.654 "num_base_bdevs_operational": 2,
00:20:39.654 "base_bdevs_list": [
00:20:39.654 {
00:20:39.654 "name": "pt1",
00:20:39.654 "uuid": "00000000-0000-0000-0000-000000000001",
00:20:39.654 "is_configured": true,
00:20:39.654 "data_offset": 256,
00:20:39.654 "data_size": 7936
00:20:39.654 },
00:20:39.654 {
00:20:39.654 "name": "pt2",
00:20:39.654 "uuid": "00000000-0000-0000-0000-000000000002",
00:20:39.654 "is_configured": true,
00:20:39.654 "data_offset": 256,
00:20:39.654 "data_size": 7936
00:20:39.654 }
00:20:39.654 ]
00:20:39.654 }'
00:20:39.654 06:30:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:20:39.654 06:30:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:20:39.914 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:20:39.914 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:20:39.914 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:20:39.914 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:20:39.914 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name
00:20:39.914 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:20:39.914 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:20:39.915 06:30:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:39.915 06:30:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:20:39.915 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:20:39.915 [2024-11-26 06:30:24.039188] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:20:40.175 06:30:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:40.175 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:20:40.175 "name": "raid_bdev1",
00:20:40.175 "aliases": [
00:20:40.175 "162ae4a7-8f3f-4862-bfd2-841168cf600e"
00:20:40.175 ],
00:20:40.175 "product_name": "Raid Volume",
00:20:40.175 "block_size": 4096,
00:20:40.175 "num_blocks": 7936,
00:20:40.175 "uuid": "162ae4a7-8f3f-4862-bfd2-841168cf600e",
00:20:40.175 "assigned_rate_limits": {
00:20:40.175 "rw_ios_per_sec": 0,
00:20:40.175 "rw_mbytes_per_sec": 0,
00:20:40.175 "r_mbytes_per_sec": 0,
00:20:40.175 "w_mbytes_per_sec": 0
00:20:40.175 },
00:20:40.175 "claimed": false,
00:20:40.175 "zoned": false,
00:20:40.175 "supported_io_types": {
00:20:40.175 "read": true,
00:20:40.175 "write": true,
00:20:40.175 "unmap": false,
00:20:40.175 "flush": false,
00:20:40.175 "reset": true,
00:20:40.175 "nvme_admin": false,
00:20:40.175 "nvme_io": false,
00:20:40.175 "nvme_io_md": false,
00:20:40.175 "write_zeroes": true,
00:20:40.175 "zcopy": false,
00:20:40.175 "get_zone_info": false,
00:20:40.175 "zone_management": false,
00:20:40.175 "zone_append": false,
00:20:40.175 "compare": false,
00:20:40.175 "compare_and_write": false,
00:20:40.175 "abort": false,
00:20:40.175 "seek_hole": false,
00:20:40.175 "seek_data": false,
00:20:40.175 "copy": false,
00:20:40.175 "nvme_iov_md": false
00:20:40.175 },
00:20:40.175 "memory_domains": [
00:20:40.175 {
00:20:40.175 "dma_device_id": "system",
00:20:40.175 "dma_device_type": 1
00:20:40.175 },
00:20:40.175 {
00:20:40.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:20:40.175 "dma_device_type": 2
00:20:40.175 },
00:20:40.175 {
00:20:40.175 "dma_device_id": "system",
00:20:40.175 "dma_device_type": 1
00:20:40.175 },
00:20:40.175 {
00:20:40.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:20:40.175 "dma_device_type": 2
00:20:40.175 }
00:20:40.175 ],
00:20:40.175 "driver_specific": {
00:20:40.175 "raid": {
00:20:40.175 "uuid": "162ae4a7-8f3f-4862-bfd2-841168cf600e",
00:20:40.175 "strip_size_kb": 0,
00:20:40.175 "state": "online",
00:20:40.175 "raid_level": "raid1",
00:20:40.175 "superblock": true,
00:20:40.175 "num_base_bdevs": 2,
00:20:40.175 "num_base_bdevs_discovered": 2,
00:20:40.175 "num_base_bdevs_operational": 2,
00:20:40.175 "base_bdevs_list": [
00:20:40.175 {
00:20:40.175 "name": "pt1",
00:20:40.175 "uuid": "00000000-0000-0000-0000-000000000001",
00:20:40.175 "is_configured": true,
00:20:40.175 "data_offset": 256,
00:20:40.175 "data_size": 7936
00:20:40.175 },
00:20:40.175 {
00:20:40.175 "name": "pt2",
00:20:40.175 "uuid": "00000000-0000-0000-0000-000000000002",
00:20:40.175 "is_configured": true,
00:20:40.175 "data_offset": 256,
00:20:40.175 "data_size": 7936
00:20:40.175 }
00:20:40.175 ]
00:20:40.175 }
00:20:40.175 }
00:20:40.175 }'
00:20:40.175 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:20:40.175 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:20:40.175 pt2'
00:20:40.175 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:20:40.175 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 '
00:20:40.175 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:20:40.175 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:20:40.175 06:30:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:40.175 06:30:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:20:40.175 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:20:40.175 06:30:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:40.175 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 '
00:20:40.175 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]]
00:20:40.175 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:20:40.175 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:20:40.175 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:20:40.175 06:30:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:40.175 06:30:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:20:40.175 06:30:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:40.175 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 '
00:20:40.175 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]]
00:20:40.175 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:20:40.175 06:30:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:40.175 06:30:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:20:40.175 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:20:40.175 [2024-11-26 06:30:24.258775] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:20:40.175 06:30:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:40.175 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=162ae4a7-8f3f-4862-bfd2-841168cf600e
00:20:40.176 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 162ae4a7-8f3f-4862-bfd2-841168cf600e ']'
00:20:40.176 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:20:40.176 06:30:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:40.176 06:30:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:20:40.176 [2024-11-26 06:30:24.302356] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:20:40.176 [2024-11-26 06:30:24.302387] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:20:40.176 [2024-11-26 06:30:24.302491] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:20:40.176 [2024-11-26 06:30:24.302560] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:20:40.176 [2024-11-26 06:30:24.302577] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:20:40.436 06:30:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:40.436 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:40.436 06:30:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:40.436 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:20:40.436 06:30:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:20:40.436 06:30:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:40.436 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:20:40.436 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:20:40.436 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:20:40.436 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:20:40.436 06:30:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:40.436 06:30:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:20:40.436 06:30:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:40.436 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:20:40.436 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:20:40.436 06:30:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:40.436 06:30:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:20:40.436 06:30:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:40.436 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:20:40.436 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:20:40.436 06:30:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:40.436 06:30:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:20:40.436 06:30:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:40.436 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:20:40.436 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:20:40.436 06:30:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0
00:20:40.436 06:30:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:20:40.436 06:30:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:20:40.436 06:30:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:40.436 06:30:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:20:40.436 06:30:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:40.436 06:30:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:20:40.436 06:30:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:40.436 06:30:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:20:40.436 [2024-11-26 06:30:24.430188] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:20:40.436 [2024-11-26 06:30:24.432435] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:20:40.436 [2024-11-26 06:30:24.432514] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:20:40.436 [2024-11-26 06:30:24.432591] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:20:40.436 [2024-11-26 06:30:24.432608] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:20:40.436 [2024-11-26 06:30:24.432620] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:20:40.436 request:
00:20:40.436 {
00:20:40.436 "name": "raid_bdev1",
00:20:40.436 "raid_level": "raid1",
00:20:40.436 "base_bdevs": [
00:20:40.436 "malloc1",
00:20:40.436 "malloc2"
00:20:40.436 ],
00:20:40.436 "superblock": false,
00:20:40.436 "method": "bdev_raid_create",
00:20:40.436 "req_id": 1
00:20:40.436 }
00:20:40.436 Got JSON-RPC error response
00:20:40.436 response:
00:20:40.436 {
00:20:40.436 "code": -17,
00:20:40.436 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:20:40.436 }
00:20:40.436 06:30:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:20:40.436 06:30:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1
00:20:40.436 06:30:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:20:40.436 06:30:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:20:40.436 06:30:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:20:40.436 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:40.436 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:20:40.436 06:30:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:40.436 06:30:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:20:40.436 06:30:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:40.436 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:20:40.436 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:20:40.436 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:20:40.436 06:30:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:40.436 06:30:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:20:40.436 [2024-11-26 06:30:24.490033] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:20:40.436 [2024-11-26 06:30:24.490102] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:40.436 [2024-11-26 06:30:24.490122] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:20:40.436 [2024-11-26 06:30:24.490133] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:40.436 [2024-11-26 06:30:24.492688] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:40.436 [2024-11-26 06:30:24.492731] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:20:40.436 [2024-11-26 06:30:24.492818] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:20:40.436 [2024-11-26 06:30:24.492887] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:20:40.436 pt1
00:20:40.436 06:30:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:40.436 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:20:40.436 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:20:40.436 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:20:40.436 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:20:40.436 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:20:40.436 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:20:40.436 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:20:40.437 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:20:40.437 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:20:40.437 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:20:40.437 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:40.437 06:30:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:40.437 06:30:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:20:40.437 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:40.437 06:30:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:40.437 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:20:40.437 "name": "raid_bdev1",
00:20:40.437 "uuid": "162ae4a7-8f3f-4862-bfd2-841168cf600e",
00:20:40.437 "strip_size_kb": 0,
00:20:40.437 "state": "configuring",
00:20:40.437 "raid_level": "raid1",
00:20:40.437 "superblock": true,
00:20:40.437 "num_base_bdevs": 2,
00:20:40.437 "num_base_bdevs_discovered": 1,
00:20:40.437 "num_base_bdevs_operational": 2,
00:20:40.437 "base_bdevs_list": [
00:20:40.437 {
00:20:40.437 "name": "pt1",
00:20:40.437 "uuid": "00000000-0000-0000-0000-000000000001",
00:20:40.437 "is_configured": true,
00:20:40.437 "data_offset": 256,
00:20:40.437 "data_size": 7936
00:20:40.437 },
00:20:40.437 {
00:20:40.437 "name": null,
00:20:40.437 "uuid": "00000000-0000-0000-0000-000000000002",
00:20:40.437 "is_configured": false,
00:20:40.437 "data_offset": 256,
00:20:40.437 "data_size": 7936
00:20:40.437 }
00:20:40.437 ]
00:20:40.437 }'
00:20:40.437 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:20:40.437 06:30:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:20:41.007 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']'
00:20:41.007 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:20:41.007 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:20:41.007 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:20:41.007 06:30:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:41.007 06:30:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:20:41.007 [2024-11-26 06:30:24.945316] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:20:41.007 [2024-11-26 06:30:24.945402] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:41.007 [2024-11-26 06:30:24.945433] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:20:41.007 [2024-11-26 06:30:24.945449] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:41.007 [2024-11-26 06:30:24.946041] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:41.007 [2024-11-26 06:30:24.946087] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:20:41.007 [2024-11-26 06:30:24.946188] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:20:41.007 [2024-11-26 06:30:24.946242] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:20:41.007 [2024-11-26 06:30:24.946426] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:20:41.007 [2024-11-26 06:30:24.946446] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:20:41.007 [2024-11-26 06:30:24.946735] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:20:41.007 [2024-11-26 06:30:24.946918] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:20:41.007 [2024-11-26 06:30:24.946934] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:20:41.007 [2024-11-26 06:30:24.947130] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:20:41.007 pt2
00:20:41.007 06:30:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:41.007 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:20:41.007 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:20:41.007 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:20:41.007 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:20:41.007 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:20:41.007 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:20:41.007 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:20:41.007 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:20:41.007 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:20:41.007 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:20:41.007 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:20:41.007 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:20:41.007 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:41.007 06:30:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:41.007 06:30:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:20:41.007 06:30:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:41.007 06:30:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:41.007 06:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:20:41.007 "name": "raid_bdev1",
00:20:41.007 "uuid": "162ae4a7-8f3f-4862-bfd2-841168cf600e",
00:20:41.007 "strip_size_kb": 0,
00:20:41.007 "state": "online",
00:20:41.007 "raid_level": "raid1",
00:20:41.007 "superblock": true,
00:20:41.007 "num_base_bdevs": 2,
00:20:41.007 "num_base_bdevs_discovered": 2,
00:20:41.007 "num_base_bdevs_operational": 2,
00:20:41.007 "base_bdevs_list": [
00:20:41.007 {
00:20:41.007 "name": "pt1",
00:20:41.007 "uuid": "00000000-0000-0000-0000-000000000001",
00:20:41.007 "is_configured": true,
00:20:41.007 "data_offset": 256,
00:20:41.007 "data_size": 7936
00:20:41.007 },
00:20:41.007 {
00:20:41.007 "name": "pt2",
00:20:41.007 "uuid": "00000000-0000-0000-0000-000000000002",
00:20:41.007 "is_configured": true,
00:20:41.007 "data_offset": 256,
00:20:41.007 "data_size": 7936
00:20:41.007 }
00:20:41.007 ]
00:20:41.007 }'
00:20:41.007 06:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:20:41.007 06:30:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:20:41.578 06:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:20:41.578 06:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:20:41.578 06:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:20:41.578 06:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:20:41.578 06:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name
00:20:41.578 06:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:20:41.578 06:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:20:41.578 06:30:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:41.578 06:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:20:41.578 06:30:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:20:41.578 [2024-11-26 06:30:25.440733] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:20:41.578 06:30:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:41.578 06:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:20:41.578 "name": "raid_bdev1",
00:20:41.578 "aliases": [
00:20:41.578 "162ae4a7-8f3f-4862-bfd2-841168cf600e"
00:20:41.578 ],
00:20:41.578 "product_name": "Raid Volume",
00:20:41.578 "block_size": 4096,
00:20:41.578 "num_blocks": 7936,
00:20:41.578 "uuid": "162ae4a7-8f3f-4862-bfd2-841168cf600e",
00:20:41.578 "assigned_rate_limits": {
00:20:41.578 "rw_ios_per_sec": 0,
00:20:41.578 "rw_mbytes_per_sec": 0,
00:20:41.578 "r_mbytes_per_sec": 0,
00:20:41.578 "w_mbytes_per_sec": 0
00:20:41.578 },
00:20:41.578 "claimed": false,
00:20:41.578 "zoned": false,
00:20:41.578 "supported_io_types": {
00:20:41.578 "read": true,
00:20:41.578 "write": true,
00:20:41.578 "unmap": false,
00:20:41.578 "flush": false,
00:20:41.578 "reset": true,
00:20:41.578 "nvme_admin": false,
00:20:41.578 "nvme_io": false,
00:20:41.578 "nvme_io_md": false,
00:20:41.578 "write_zeroes": true,
00:20:41.578 "zcopy": false,
00:20:41.578 "get_zone_info": false,
00:20:41.578 "zone_management": false,
00:20:41.578 "zone_append": false,
00:20:41.578 "compare": false,
00:20:41.578 "compare_and_write": false,
00:20:41.578 "abort": false,
00:20:41.578 "seek_hole": false,
00:20:41.578 "seek_data": false,
00:20:41.578 "copy": false,
00:20:41.578 "nvme_iov_md": false
00:20:41.578 },
00:20:41.578 "memory_domains": [
00:20:41.578 {
00:20:41.578 "dma_device_id": "system",
00:20:41.578 "dma_device_type": 1
00:20:41.578 },
00:20:41.578 {
00:20:41.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:20:41.578 "dma_device_type": 2
00:20:41.578 },
00:20:41.578 {
00:20:41.578 "dma_device_id": "system",
00:20:41.578 "dma_device_type": 1
00:20:41.578 },
00:20:41.578 {
00:20:41.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:20:41.578 "dma_device_type": 2
00:20:41.578 }
00:20:41.578 ],
00:20:41.578 "driver_specific": {
00:20:41.578 "raid": {
00:20:41.578 "uuid": "162ae4a7-8f3f-4862-bfd2-841168cf600e",
00:20:41.578 "strip_size_kb": 0,
00:20:41.578 "state": "online",
00:20:41.578 "raid_level": "raid1",
00:20:41.578 "superblock": true,
00:20:41.578 "num_base_bdevs": 2,
00:20:41.578 "num_base_bdevs_discovered": 2,
00:20:41.578 "num_base_bdevs_operational": 2,
00:20:41.578 "base_bdevs_list": [
00:20:41.578 {
00:20:41.578 "name": "pt1",
00:20:41.578 "uuid": "00000000-0000-0000-0000-000000000001",
00:20:41.578 "is_configured": true,
00:20:41.578 "data_offset": 256,
00:20:41.578 "data_size": 7936
00:20:41.578 },
00:20:41.578 {
00:20:41.578 "name": "pt2",
00:20:41.578 "uuid": "00000000-0000-0000-0000-000000000002",
00:20:41.578 "is_configured": true,
00:20:41.578 "data_offset": 256,
00:20:41.578 "data_size": 7936
00:20:41.578 }
00:20:41.578 ]
00:20:41.578 }
00:20:41.578 }
00:20:41.578 }'
00:20:41.578
06:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:41.578 06:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:41.578 pt2' 00:20:41.578 06:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:41.578 06:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:20:41.578 06:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:41.578 06:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:41.578 06:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:41.578 06:30:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.578 06:30:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:41.578 06:30:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.578 06:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:20:41.578 06:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:20:41.578 06:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:41.578 06:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:41.578 06:30:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.578 06:30:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:41.578 06:30:25 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:41.578 06:30:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.578 06:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:20:41.578 06:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:20:41.578 06:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:41.578 06:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:20:41.578 06:30:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.578 06:30:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:41.578 [2024-11-26 06:30:25.656339] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:41.579 06:30:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.579 06:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 162ae4a7-8f3f-4862-bfd2-841168cf600e '!=' 162ae4a7-8f3f-4862-bfd2-841168cf600e ']' 00:20:41.579 06:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:20:41.579 06:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:41.579 06:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:20:41.579 06:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:20:41.579 06:30:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.579 06:30:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:41.579 [2024-11-26 06:30:25.704067] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 
00:20:41.579 06:30:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.579 06:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:41.579 06:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:41.839 06:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:41.839 06:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:41.839 06:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:41.839 06:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:41.839 06:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:41.839 06:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:41.839 06:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:41.839 06:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:41.839 06:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:41.839 06:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:41.839 06:30:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.839 06:30:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:41.839 06:30:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.839 06:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:41.839 "name": "raid_bdev1", 00:20:41.839 "uuid": 
"162ae4a7-8f3f-4862-bfd2-841168cf600e", 00:20:41.839 "strip_size_kb": 0, 00:20:41.839 "state": "online", 00:20:41.839 "raid_level": "raid1", 00:20:41.839 "superblock": true, 00:20:41.839 "num_base_bdevs": 2, 00:20:41.839 "num_base_bdevs_discovered": 1, 00:20:41.839 "num_base_bdevs_operational": 1, 00:20:41.839 "base_bdevs_list": [ 00:20:41.839 { 00:20:41.839 "name": null, 00:20:41.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:41.839 "is_configured": false, 00:20:41.839 "data_offset": 0, 00:20:41.839 "data_size": 7936 00:20:41.839 }, 00:20:41.839 { 00:20:41.839 "name": "pt2", 00:20:41.839 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:41.839 "is_configured": true, 00:20:41.839 "data_offset": 256, 00:20:41.839 "data_size": 7936 00:20:41.839 } 00:20:41.839 ] 00:20:41.839 }' 00:20:41.839 06:30:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:41.839 06:30:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:42.099 06:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:42.099 06:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.099 06:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:42.099 [2024-11-26 06:30:26.119306] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:42.099 [2024-11-26 06:30:26.119339] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:42.099 [2024-11-26 06:30:26.119441] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:42.099 [2024-11-26 06:30:26.119494] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:42.099 [2024-11-26 06:30:26.119508] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state 
offline 00:20:42.099 06:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.099 06:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:42.099 06:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:20:42.099 06:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.099 06:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:42.099 06:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.099 06:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:20:42.099 06:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:20:42.099 06:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:20:42.099 06:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:42.099 06:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:20:42.099 06:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.099 06:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:42.099 06:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.099 06:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:20:42.099 06:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:42.099 06:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:20:42.099 06:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:20:42.099 06:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 
00:20:42.099 06:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:42.099 06:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.099 06:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:42.099 [2024-11-26 06:30:26.183175] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:42.099 [2024-11-26 06:30:26.183251] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:42.099 [2024-11-26 06:30:26.183273] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:20:42.099 [2024-11-26 06:30:26.183286] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:42.099 [2024-11-26 06:30:26.185965] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:42.099 [2024-11-26 06:30:26.186007] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:42.099 [2024-11-26 06:30:26.186117] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:42.099 [2024-11-26 06:30:26.186174] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:42.099 [2024-11-26 06:30:26.186340] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:42.100 [2024-11-26 06:30:26.186361] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:42.100 [2024-11-26 06:30:26.186622] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:42.100 [2024-11-26 06:30:26.186803] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:42.100 [2024-11-26 06:30:26.186821] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000008200 00:20:42.100 [2024-11-26 06:30:26.186989] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:42.100 pt2 00:20:42.100 06:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.100 06:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:42.100 06:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:42.100 06:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:42.100 06:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:42.100 06:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:42.100 06:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:42.100 06:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:42.100 06:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:42.100 06:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:42.100 06:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:42.100 06:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:42.100 06:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:42.100 06:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.100 06:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:42.100 06:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.359 06:30:26 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:42.359 "name": "raid_bdev1", 00:20:42.359 "uuid": "162ae4a7-8f3f-4862-bfd2-841168cf600e", 00:20:42.359 "strip_size_kb": 0, 00:20:42.359 "state": "online", 00:20:42.359 "raid_level": "raid1", 00:20:42.359 "superblock": true, 00:20:42.359 "num_base_bdevs": 2, 00:20:42.359 "num_base_bdevs_discovered": 1, 00:20:42.359 "num_base_bdevs_operational": 1, 00:20:42.359 "base_bdevs_list": [ 00:20:42.359 { 00:20:42.359 "name": null, 00:20:42.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:42.359 "is_configured": false, 00:20:42.359 "data_offset": 256, 00:20:42.359 "data_size": 7936 00:20:42.359 }, 00:20:42.359 { 00:20:42.359 "name": "pt2", 00:20:42.359 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:42.359 "is_configured": true, 00:20:42.359 "data_offset": 256, 00:20:42.359 "data_size": 7936 00:20:42.359 } 00:20:42.359 ] 00:20:42.359 }' 00:20:42.359 06:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:42.359 06:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:42.619 06:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:42.619 06:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.619 06:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:42.619 [2024-11-26 06:30:26.618416] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:42.619 [2024-11-26 06:30:26.618457] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:42.619 [2024-11-26 06:30:26.618571] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:42.619 [2024-11-26 06:30:26.618629] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:20:42.619 [2024-11-26 06:30:26.618640] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:20:42.619 06:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.619 06:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:42.619 06:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.619 06:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:42.619 06:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:20:42.619 06:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.619 06:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:20:42.619 06:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:20:42.619 06:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:20:42.619 06:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:42.619 06:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.619 06:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:42.619 [2024-11-26 06:30:26.682335] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:42.619 [2024-11-26 06:30:26.682412] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:42.619 [2024-11-26 06:30:26.682438] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:20:42.619 [2024-11-26 06:30:26.682449] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:42.619 [2024-11-26 06:30:26.685308] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:42.619 [2024-11-26 06:30:26.685352] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:42.619 [2024-11-26 06:30:26.685465] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:42.619 [2024-11-26 06:30:26.685532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:42.619 [2024-11-26 06:30:26.685762] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:20:42.619 [2024-11-26 06:30:26.685783] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:42.619 [2024-11-26 06:30:26.685803] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:20:42.619 [2024-11-26 06:30:26.685888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:42.619 [2024-11-26 06:30:26.686000] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:20:42.619 [2024-11-26 06:30:26.686023] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:42.619 [2024-11-26 06:30:26.686346] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:42.619 [2024-11-26 06:30:26.686529] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:20:42.619 [2024-11-26 06:30:26.686552] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:20:42.619 [2024-11-26 06:30:26.686777] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:42.619 pt1 00:20:42.619 06:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.619 06:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- 
# '[' 2 -gt 2 ']' 00:20:42.619 06:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:42.619 06:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:42.619 06:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:42.619 06:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:42.619 06:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:42.619 06:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:42.619 06:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:42.619 06:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:42.619 06:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:42.619 06:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:42.619 06:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:42.619 06:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.619 06:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:42.619 06:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:42.619 06:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.619 06:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:42.619 "name": "raid_bdev1", 00:20:42.619 "uuid": "162ae4a7-8f3f-4862-bfd2-841168cf600e", 00:20:42.619 "strip_size_kb": 0, 00:20:42.619 "state": "online", 00:20:42.619 
"raid_level": "raid1", 00:20:42.619 "superblock": true, 00:20:42.619 "num_base_bdevs": 2, 00:20:42.619 "num_base_bdevs_discovered": 1, 00:20:42.619 "num_base_bdevs_operational": 1, 00:20:42.619 "base_bdevs_list": [ 00:20:42.619 { 00:20:42.619 "name": null, 00:20:42.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:42.619 "is_configured": false, 00:20:42.619 "data_offset": 256, 00:20:42.619 "data_size": 7936 00:20:42.619 }, 00:20:42.619 { 00:20:42.619 "name": "pt2", 00:20:42.619 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:42.619 "is_configured": true, 00:20:42.619 "data_offset": 256, 00:20:42.619 "data_size": 7936 00:20:42.619 } 00:20:42.619 ] 00:20:42.619 }' 00:20:42.619 06:30:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:42.619 06:30:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:43.188 06:30:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:20:43.188 06:30:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.188 06:30:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:43.188 06:30:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:20:43.188 06:30:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.188 06:30:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:20:43.188 06:30:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:43.188 06:30:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:20:43.188 06:30:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.188 06:30:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # 
set +x 00:20:43.188 [2024-11-26 06:30:27.126228] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:43.188 06:30:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.188 06:30:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 162ae4a7-8f3f-4862-bfd2-841168cf600e '!=' 162ae4a7-8f3f-4862-bfd2-841168cf600e ']' 00:20:43.188 06:30:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86743 00:20:43.188 06:30:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 86743 ']' 00:20:43.188 06:30:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 86743 00:20:43.188 06:30:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:20:43.188 06:30:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:43.188 06:30:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86743 00:20:43.188 06:30:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:43.188 06:30:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:43.188 killing process with pid 86743 00:20:43.188 06:30:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86743' 00:20:43.188 06:30:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 86743 00:20:43.188 [2024-11-26 06:30:27.202360] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:43.188 [2024-11-26 06:30:27.202485] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:43.188 06:30:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 86743 00:20:43.188 [2024-11-26 06:30:27.202553] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:43.188 [2024-11-26 06:30:27.202573] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:20:43.447 [2024-11-26 06:30:27.432635] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:44.827 06:30:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:20:44.827 00:20:44.827 real 0m6.133s 00:20:44.827 user 0m9.065s 00:20:44.827 sys 0m1.206s 00:20:44.827 06:30:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:44.827 06:30:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:44.827 ************************************ 00:20:44.827 END TEST raid_superblock_test_4k 00:20:44.827 ************************************ 00:20:44.827 06:30:28 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:20:44.827 06:30:28 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:20:44.827 06:30:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:20:44.827 06:30:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:44.827 06:30:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:44.827 ************************************ 00:20:44.827 START TEST raid_rebuild_test_sb_4k 00:20:44.827 ************************************ 00:20:44.827 06:30:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:20:44.827 06:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:20:44.827 06:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:20:44.827 06:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:20:44.827 
06:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:20:44.827 06:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:20:44.827 06:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:44.827 06:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:44.827 06:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:20:44.827 06:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:44.827 06:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:44.827 06:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:44.827 06:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:44.827 06:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:44.827 06:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:44.827 06:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:44.827 06:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:20:44.827 06:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:44.827 06:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:44.827 06:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:44.827 06:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:44.827 06:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:20:44.827 06:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # 
strip_size=0 00:20:44.827 06:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:20:44.827 06:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:20:44.827 06:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=87067 00:20:44.827 06:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 87067 00:20:44.827 06:30:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:44.827 06:30:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 87067 ']' 00:20:44.827 06:30:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:44.827 06:30:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:44.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:44.827 06:30:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:44.827 06:30:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:44.827 06:30:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:44.827 [2024-11-26 06:30:28.817432] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:20:44.827 [2024-11-26 06:30:28.818414] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87067 ] 00:20:44.827 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:20:44.827 Zero copy mechanism will not be used. 00:20:45.086 [2024-11-26 06:30:29.022975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:45.086 [2024-11-26 06:30:29.163037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:45.351 [2024-11-26 06:30:29.397883] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:45.351 [2024-11-26 06:30:29.397956] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:45.628 06:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:45.628 06:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:20:45.628 06:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:45.628 06:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:20:45.628 06:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.628 06:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:45.628 BaseBdev1_malloc 00:20:45.628 06:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.628 06:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:45.628 06:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.628 06:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:45.628 [2024-11-26 06:30:29.697978] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:45.628 [2024-11-26 06:30:29.698069] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:45.628 [2024-11-26 06:30:29.698096] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000007280 00:20:45.628 [2024-11-26 06:30:29.698108] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:45.628 [2024-11-26 06:30:29.700569] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:45.628 [2024-11-26 06:30:29.700606] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:45.628 BaseBdev1 00:20:45.628 06:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.628 06:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:45.628 06:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:20:45.628 06:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.628 06:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:45.628 BaseBdev2_malloc 00:20:45.628 06:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.628 06:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:45.628 06:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.628 06:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:45.628 [2024-11-26 06:30:29.758609] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:45.628 [2024-11-26 06:30:29.758693] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:45.628 [2024-11-26 06:30:29.758714] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:45.628 [2024-11-26 06:30:29.758726] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:20:45.890 [2024-11-26 06:30:29.761063] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:45.890 [2024-11-26 06:30:29.761099] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:45.890 BaseBdev2 00:20:45.890 06:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.890 06:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:20:45.890 06:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.890 06:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:45.890 spare_malloc 00:20:45.890 06:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.890 06:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:45.890 06:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.890 06:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:45.890 spare_delay 00:20:45.890 06:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.890 06:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:45.890 06:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.890 06:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:45.890 [2024-11-26 06:30:29.838602] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:45.890 [2024-11-26 06:30:29.838663] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:45.890 [2024-11-26 06:30:29.838682] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:45.890 [2024-11-26 06:30:29.838693] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:45.890 [2024-11-26 06:30:29.841201] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:45.890 [2024-11-26 06:30:29.841241] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:45.890 spare 00:20:45.890 06:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.890 06:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:20:45.890 06:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.890 06:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:45.890 [2024-11-26 06:30:29.850620] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:45.890 [2024-11-26 06:30:29.852686] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:45.890 [2024-11-26 06:30:29.852873] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:45.890 [2024-11-26 06:30:29.852891] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:45.890 [2024-11-26 06:30:29.853160] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:45.890 [2024-11-26 06:30:29.853379] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:45.890 [2024-11-26 06:30:29.853396] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:45.890 [2024-11-26 06:30:29.853569] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:45.890 
06:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.890 06:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:45.890 06:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:45.890 06:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:45.890 06:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:45.890 06:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:45.890 06:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:45.890 06:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:45.890 06:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:45.890 06:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:45.890 06:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:45.890 06:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:45.890 06:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:45.890 06:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.890 06:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:45.890 06:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.890 06:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:45.890 "name": "raid_bdev1", 00:20:45.890 "uuid": "4501a39a-ec3c-404f-b2a9-94a34e0c60ba", 
00:20:45.890 "strip_size_kb": 0, 00:20:45.890 "state": "online", 00:20:45.890 "raid_level": "raid1", 00:20:45.890 "superblock": true, 00:20:45.890 "num_base_bdevs": 2, 00:20:45.890 "num_base_bdevs_discovered": 2, 00:20:45.890 "num_base_bdevs_operational": 2, 00:20:45.890 "base_bdevs_list": [ 00:20:45.890 { 00:20:45.890 "name": "BaseBdev1", 00:20:45.890 "uuid": "f548cab4-63e3-527d-9571-cf17cde91457", 00:20:45.890 "is_configured": true, 00:20:45.890 "data_offset": 256, 00:20:45.890 "data_size": 7936 00:20:45.890 }, 00:20:45.890 { 00:20:45.890 "name": "BaseBdev2", 00:20:45.890 "uuid": "012329b4-55c0-5069-8338-516e65f39566", 00:20:45.890 "is_configured": true, 00:20:45.890 "data_offset": 256, 00:20:45.890 "data_size": 7936 00:20:45.890 } 00:20:45.890 ] 00:20:45.890 }' 00:20:45.890 06:30:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:45.890 06:30:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:46.460 06:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:46.460 06:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.460 06:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:46.460 06:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:46.460 [2024-11-26 06:30:30.326172] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:46.460 06:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.460 06:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:20:46.460 06:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:46.460 06:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 
00:20:46.461 06:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.461 06:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:46.461 06:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.461 06:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:20:46.461 06:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:20:46.461 06:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:20:46.461 06:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:20:46.461 06:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:20:46.461 06:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:46.461 06:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:20:46.461 06:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:46.461 06:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:46.461 06:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:46.461 06:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:20:46.461 06:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:46.461 06:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:46.461 06:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:46.721 [2024-11-26 06:30:30.625397] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005fb0 00:20:46.721 /dev/nbd0 00:20:46.721 06:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:46.721 06:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:46.721 06:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:46.721 06:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:20:46.721 06:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:46.721 06:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:46.721 06:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:46.721 06:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:20:46.721 06:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:46.721 06:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:46.721 06:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:46.721 1+0 records in 00:20:46.721 1+0 records out 00:20:46.721 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000424854 s, 9.6 MB/s 00:20:46.721 06:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:46.721 06:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:20:46.721 06:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:46.721 06:30:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:46.721 06:30:30 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:20:46.721 06:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:46.721 06:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:46.721 06:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:20:46.721 06:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:20:46.721 06:30:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:20:47.288 7936+0 records in 00:20:47.288 7936+0 records out 00:20:47.288 32505856 bytes (33 MB, 31 MiB) copied, 0.641059 s, 50.7 MB/s 00:20:47.288 06:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:20:47.288 06:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:47.288 06:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:47.288 06:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:47.288 06:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:20:47.288 06:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:47.288 06:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:47.547 06:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:47.547 [2024-11-26 06:30:31.568345] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:47.547 06:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:47.547 06:30:31 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:47.547 06:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:47.547 06:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:47.547 06:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:47.547 06:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:20:47.547 06:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:20:47.547 06:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:20:47.547 06:30:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.547 06:30:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:47.547 [2024-11-26 06:30:31.588440] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:47.547 06:30:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.547 06:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:47.547 06:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:47.547 06:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:47.547 06:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:47.547 06:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:47.547 06:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:47.547 06:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:47.547 06:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 
-- # local num_base_bdevs 00:20:47.547 06:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:47.547 06:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:47.547 06:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:47.547 06:30:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.547 06:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:47.547 06:30:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:47.547 06:30:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.547 06:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:47.547 "name": "raid_bdev1", 00:20:47.547 "uuid": "4501a39a-ec3c-404f-b2a9-94a34e0c60ba", 00:20:47.547 "strip_size_kb": 0, 00:20:47.547 "state": "online", 00:20:47.547 "raid_level": "raid1", 00:20:47.547 "superblock": true, 00:20:47.547 "num_base_bdevs": 2, 00:20:47.547 "num_base_bdevs_discovered": 1, 00:20:47.547 "num_base_bdevs_operational": 1, 00:20:47.547 "base_bdevs_list": [ 00:20:47.547 { 00:20:47.547 "name": null, 00:20:47.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:47.547 "is_configured": false, 00:20:47.547 "data_offset": 0, 00:20:47.547 "data_size": 7936 00:20:47.547 }, 00:20:47.547 { 00:20:47.547 "name": "BaseBdev2", 00:20:47.547 "uuid": "012329b4-55c0-5069-8338-516e65f39566", 00:20:47.547 "is_configured": true, 00:20:47.547 "data_offset": 256, 00:20:47.547 "data_size": 7936 00:20:47.547 } 00:20:47.547 ] 00:20:47.547 }' 00:20:47.547 06:30:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:47.547 06:30:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:48.115 06:30:32 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:48.115 06:30:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.115 06:30:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:48.115 [2024-11-26 06:30:32.059665] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:48.115 [2024-11-26 06:30:32.078159] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:20:48.115 06:30:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.115 06:30:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:48.115 [2024-11-26 06:30:32.080656] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:49.049 06:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:49.049 06:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:49.049 06:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:49.049 06:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:49.049 06:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:49.049 06:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:49.049 06:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:49.049 06:30:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.049 06:30:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:49.049 06:30:33 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.049 06:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:49.049 "name": "raid_bdev1", 00:20:49.049 "uuid": "4501a39a-ec3c-404f-b2a9-94a34e0c60ba", 00:20:49.049 "strip_size_kb": 0, 00:20:49.049 "state": "online", 00:20:49.049 "raid_level": "raid1", 00:20:49.049 "superblock": true, 00:20:49.049 "num_base_bdevs": 2, 00:20:49.049 "num_base_bdevs_discovered": 2, 00:20:49.049 "num_base_bdevs_operational": 2, 00:20:49.049 "process": { 00:20:49.049 "type": "rebuild", 00:20:49.049 "target": "spare", 00:20:49.049 "progress": { 00:20:49.049 "blocks": 2560, 00:20:49.049 "percent": 32 00:20:49.049 } 00:20:49.049 }, 00:20:49.049 "base_bdevs_list": [ 00:20:49.049 { 00:20:49.049 "name": "spare", 00:20:49.049 "uuid": "8f77efb5-8543-5ba1-9e14-ecd78073c78a", 00:20:49.049 "is_configured": true, 00:20:49.049 "data_offset": 256, 00:20:49.049 "data_size": 7936 00:20:49.049 }, 00:20:49.049 { 00:20:49.049 "name": "BaseBdev2", 00:20:49.049 "uuid": "012329b4-55c0-5069-8338-516e65f39566", 00:20:49.049 "is_configured": true, 00:20:49.049 "data_offset": 256, 00:20:49.049 "data_size": 7936 00:20:49.049 } 00:20:49.049 ] 00:20:49.049 }' 00:20:49.049 06:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:49.307 06:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:49.307 06:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:49.307 06:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:49.307 06:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:49.307 06:30:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.307 06:30:33 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:49.307 [2024-11-26 06:30:33.248171] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:49.307 [2024-11-26 06:30:33.290958] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:49.307 [2024-11-26 06:30:33.291050] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:49.307 [2024-11-26 06:30:33.291066] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:49.307 [2024-11-26 06:30:33.291088] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:49.307 06:30:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.307 06:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:49.307 06:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:49.307 06:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:49.307 06:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:49.307 06:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:49.307 06:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:49.307 06:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:49.307 06:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:49.307 06:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:49.307 06:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:49.307 06:30:33 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:49.307 06:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:49.307 06:30:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.307 06:30:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:49.307 06:30:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.307 06:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:49.307 "name": "raid_bdev1", 00:20:49.307 "uuid": "4501a39a-ec3c-404f-b2a9-94a34e0c60ba", 00:20:49.307 "strip_size_kb": 0, 00:20:49.307 "state": "online", 00:20:49.307 "raid_level": "raid1", 00:20:49.307 "superblock": true, 00:20:49.307 "num_base_bdevs": 2, 00:20:49.307 "num_base_bdevs_discovered": 1, 00:20:49.307 "num_base_bdevs_operational": 1, 00:20:49.307 "base_bdevs_list": [ 00:20:49.307 { 00:20:49.307 "name": null, 00:20:49.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:49.307 "is_configured": false, 00:20:49.307 "data_offset": 0, 00:20:49.307 "data_size": 7936 00:20:49.307 }, 00:20:49.307 { 00:20:49.307 "name": "BaseBdev2", 00:20:49.307 "uuid": "012329b4-55c0-5069-8338-516e65f39566", 00:20:49.307 "is_configured": true, 00:20:49.307 "data_offset": 256, 00:20:49.307 "data_size": 7936 00:20:49.307 } 00:20:49.307 ] 00:20:49.307 }' 00:20:49.307 06:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:49.307 06:30:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:49.872 06:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:49.872 06:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:49.872 06:30:33 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:49.872 06:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:49.872 06:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:49.872 06:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:49.872 06:30:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.872 06:30:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:49.872 06:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:49.872 06:30:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.872 06:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:49.872 "name": "raid_bdev1", 00:20:49.872 "uuid": "4501a39a-ec3c-404f-b2a9-94a34e0c60ba", 00:20:49.872 "strip_size_kb": 0, 00:20:49.872 "state": "online", 00:20:49.872 "raid_level": "raid1", 00:20:49.872 "superblock": true, 00:20:49.872 "num_base_bdevs": 2, 00:20:49.872 "num_base_bdevs_discovered": 1, 00:20:49.872 "num_base_bdevs_operational": 1, 00:20:49.872 "base_bdevs_list": [ 00:20:49.872 { 00:20:49.872 "name": null, 00:20:49.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:49.872 "is_configured": false, 00:20:49.872 "data_offset": 0, 00:20:49.872 "data_size": 7936 00:20:49.872 }, 00:20:49.872 { 00:20:49.872 "name": "BaseBdev2", 00:20:49.872 "uuid": "012329b4-55c0-5069-8338-516e65f39566", 00:20:49.872 "is_configured": true, 00:20:49.872 "data_offset": 256, 00:20:49.872 "data_size": 7936 00:20:49.872 } 00:20:49.872 ] 00:20:49.872 }' 00:20:49.872 06:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:49.872 06:30:33 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:49.872 06:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:49.872 06:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:49.872 06:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:49.872 06:30:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.872 06:30:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:49.872 [2024-11-26 06:30:33.917769] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:49.872 [2024-11-26 06:30:33.936233] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:20:49.872 06:30:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.872 06:30:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:20:49.872 [2024-11-26 06:30:33.938526] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:51.250 06:30:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:51.250 06:30:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:51.250 06:30:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:51.250 06:30:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:51.250 06:30:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:51.250 06:30:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:51.250 06:30:34 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.250 06:30:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:51.250 06:30:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:51.250 06:30:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.250 06:30:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:51.250 "name": "raid_bdev1", 00:20:51.250 "uuid": "4501a39a-ec3c-404f-b2a9-94a34e0c60ba", 00:20:51.250 "strip_size_kb": 0, 00:20:51.250 "state": "online", 00:20:51.250 "raid_level": "raid1", 00:20:51.250 "superblock": true, 00:20:51.250 "num_base_bdevs": 2, 00:20:51.250 "num_base_bdevs_discovered": 2, 00:20:51.250 "num_base_bdevs_operational": 2, 00:20:51.250 "process": { 00:20:51.250 "type": "rebuild", 00:20:51.250 "target": "spare", 00:20:51.250 "progress": { 00:20:51.250 "blocks": 2560, 00:20:51.250 "percent": 32 00:20:51.250 } 00:20:51.250 }, 00:20:51.250 "base_bdevs_list": [ 00:20:51.250 { 00:20:51.250 "name": "spare", 00:20:51.250 "uuid": "8f77efb5-8543-5ba1-9e14-ecd78073c78a", 00:20:51.250 "is_configured": true, 00:20:51.250 "data_offset": 256, 00:20:51.250 "data_size": 7936 00:20:51.250 }, 00:20:51.250 { 00:20:51.250 "name": "BaseBdev2", 00:20:51.250 "uuid": "012329b4-55c0-5069-8338-516e65f39566", 00:20:51.250 "is_configured": true, 00:20:51.250 "data_offset": 256, 00:20:51.250 "data_size": 7936 00:20:51.250 } 00:20:51.250 ] 00:20:51.250 }' 00:20:51.250 06:30:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:51.250 06:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:51.250 06:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:51.250 06:30:35 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:51.250 06:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:20:51.250 06:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:20:51.250 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:20:51.250 06:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:20:51.250 06:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:20:51.250 06:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:20:51.250 06:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=708 00:20:51.250 06:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:51.250 06:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:51.250 06:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:51.250 06:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:51.250 06:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:51.250 06:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:51.250 06:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:51.250 06:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:51.250 06:30:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.250 06:30:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:51.250 06:30:35 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.250 06:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:51.250 "name": "raid_bdev1", 00:20:51.250 "uuid": "4501a39a-ec3c-404f-b2a9-94a34e0c60ba", 00:20:51.250 "strip_size_kb": 0, 00:20:51.250 "state": "online", 00:20:51.250 "raid_level": "raid1", 00:20:51.250 "superblock": true, 00:20:51.250 "num_base_bdevs": 2, 00:20:51.250 "num_base_bdevs_discovered": 2, 00:20:51.250 "num_base_bdevs_operational": 2, 00:20:51.250 "process": { 00:20:51.250 "type": "rebuild", 00:20:51.250 "target": "spare", 00:20:51.250 "progress": { 00:20:51.250 "blocks": 2816, 00:20:51.250 "percent": 35 00:20:51.250 } 00:20:51.250 }, 00:20:51.250 "base_bdevs_list": [ 00:20:51.250 { 00:20:51.250 "name": "spare", 00:20:51.250 "uuid": "8f77efb5-8543-5ba1-9e14-ecd78073c78a", 00:20:51.250 "is_configured": true, 00:20:51.250 "data_offset": 256, 00:20:51.250 "data_size": 7936 00:20:51.250 }, 00:20:51.250 { 00:20:51.250 "name": "BaseBdev2", 00:20:51.250 "uuid": "012329b4-55c0-5069-8338-516e65f39566", 00:20:51.250 "is_configured": true, 00:20:51.250 "data_offset": 256, 00:20:51.250 "data_size": 7936 00:20:51.250 } 00:20:51.250 ] 00:20:51.250 }' 00:20:51.250 06:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:51.250 06:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:51.250 06:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:51.250 06:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:51.250 06:30:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:52.188 06:30:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:52.188 06:30:36 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:52.188 06:30:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:52.188 06:30:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:52.188 06:30:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:52.188 06:30:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:52.188 06:30:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:52.188 06:30:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:52.188 06:30:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.188 06:30:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:52.188 06:30:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.188 06:30:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:52.188 "name": "raid_bdev1", 00:20:52.188 "uuid": "4501a39a-ec3c-404f-b2a9-94a34e0c60ba", 00:20:52.188 "strip_size_kb": 0, 00:20:52.188 "state": "online", 00:20:52.188 "raid_level": "raid1", 00:20:52.188 "superblock": true, 00:20:52.188 "num_base_bdevs": 2, 00:20:52.188 "num_base_bdevs_discovered": 2, 00:20:52.188 "num_base_bdevs_operational": 2, 00:20:52.188 "process": { 00:20:52.188 "type": "rebuild", 00:20:52.188 "target": "spare", 00:20:52.188 "progress": { 00:20:52.188 "blocks": 5632, 00:20:52.188 "percent": 70 00:20:52.188 } 00:20:52.188 }, 00:20:52.188 "base_bdevs_list": [ 00:20:52.188 { 00:20:52.188 "name": "spare", 00:20:52.188 "uuid": "8f77efb5-8543-5ba1-9e14-ecd78073c78a", 00:20:52.188 "is_configured": true, 00:20:52.188 "data_offset": 256, 00:20:52.188 "data_size": 7936 00:20:52.188 
}, 00:20:52.188 { 00:20:52.188 "name": "BaseBdev2", 00:20:52.188 "uuid": "012329b4-55c0-5069-8338-516e65f39566", 00:20:52.188 "is_configured": true, 00:20:52.188 "data_offset": 256, 00:20:52.188 "data_size": 7936 00:20:52.188 } 00:20:52.188 ] 00:20:52.188 }' 00:20:52.188 06:30:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:52.447 06:30:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:52.448 06:30:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:52.448 06:30:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:52.448 06:30:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:53.016 [2024-11-26 06:30:37.064632] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:53.016 [2024-11-26 06:30:37.064731] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:53.016 [2024-11-26 06:30:37.064848] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:53.275 06:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:53.275 06:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:53.275 06:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:53.275 06:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:53.275 06:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:53.275 06:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:53.275 06:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:20:53.275 06:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:53.275 06:30:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.275 06:30:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:53.535 06:30:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.535 06:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:53.535 "name": "raid_bdev1", 00:20:53.535 "uuid": "4501a39a-ec3c-404f-b2a9-94a34e0c60ba", 00:20:53.535 "strip_size_kb": 0, 00:20:53.535 "state": "online", 00:20:53.535 "raid_level": "raid1", 00:20:53.535 "superblock": true, 00:20:53.535 "num_base_bdevs": 2, 00:20:53.535 "num_base_bdevs_discovered": 2, 00:20:53.535 "num_base_bdevs_operational": 2, 00:20:53.535 "base_bdevs_list": [ 00:20:53.535 { 00:20:53.535 "name": "spare", 00:20:53.535 "uuid": "8f77efb5-8543-5ba1-9e14-ecd78073c78a", 00:20:53.535 "is_configured": true, 00:20:53.535 "data_offset": 256, 00:20:53.535 "data_size": 7936 00:20:53.535 }, 00:20:53.535 { 00:20:53.535 "name": "BaseBdev2", 00:20:53.535 "uuid": "012329b4-55c0-5069-8338-516e65f39566", 00:20:53.535 "is_configured": true, 00:20:53.535 "data_offset": 256, 00:20:53.535 "data_size": 7936 00:20:53.535 } 00:20:53.535 ] 00:20:53.535 }' 00:20:53.535 06:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:53.535 06:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:53.535 06:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:53.535 06:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:53.535 06:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 
00:20:53.535 06:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:53.535 06:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:53.535 06:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:53.535 06:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:53.535 06:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:53.535 06:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:53.535 06:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:53.535 06:30:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.535 06:30:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:53.535 06:30:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.535 06:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:53.535 "name": "raid_bdev1", 00:20:53.535 "uuid": "4501a39a-ec3c-404f-b2a9-94a34e0c60ba", 00:20:53.535 "strip_size_kb": 0, 00:20:53.535 "state": "online", 00:20:53.535 "raid_level": "raid1", 00:20:53.535 "superblock": true, 00:20:53.535 "num_base_bdevs": 2, 00:20:53.535 "num_base_bdevs_discovered": 2, 00:20:53.535 "num_base_bdevs_operational": 2, 00:20:53.535 "base_bdevs_list": [ 00:20:53.535 { 00:20:53.535 "name": "spare", 00:20:53.535 "uuid": "8f77efb5-8543-5ba1-9e14-ecd78073c78a", 00:20:53.535 "is_configured": true, 00:20:53.535 "data_offset": 256, 00:20:53.535 "data_size": 7936 00:20:53.535 }, 00:20:53.535 { 00:20:53.535 "name": "BaseBdev2", 00:20:53.535 "uuid": "012329b4-55c0-5069-8338-516e65f39566", 00:20:53.535 "is_configured": true, 
00:20:53.535 "data_offset": 256, 00:20:53.535 "data_size": 7936 00:20:53.535 } 00:20:53.535 ] 00:20:53.535 }' 00:20:53.535 06:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:53.535 06:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:53.535 06:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:53.795 06:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:53.795 06:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:53.795 06:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:53.795 06:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:53.795 06:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:53.795 06:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:53.795 06:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:53.795 06:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:53.795 06:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:53.795 06:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:53.795 06:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:53.795 06:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:53.795 06:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:53.795 06:30:37 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.795 06:30:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:53.795 06:30:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.795 06:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:53.795 "name": "raid_bdev1", 00:20:53.795 "uuid": "4501a39a-ec3c-404f-b2a9-94a34e0c60ba", 00:20:53.795 "strip_size_kb": 0, 00:20:53.795 "state": "online", 00:20:53.795 "raid_level": "raid1", 00:20:53.795 "superblock": true, 00:20:53.795 "num_base_bdevs": 2, 00:20:53.795 "num_base_bdevs_discovered": 2, 00:20:53.795 "num_base_bdevs_operational": 2, 00:20:53.795 "base_bdevs_list": [ 00:20:53.795 { 00:20:53.795 "name": "spare", 00:20:53.795 "uuid": "8f77efb5-8543-5ba1-9e14-ecd78073c78a", 00:20:53.795 "is_configured": true, 00:20:53.795 "data_offset": 256, 00:20:53.795 "data_size": 7936 00:20:53.795 }, 00:20:53.795 { 00:20:53.795 "name": "BaseBdev2", 00:20:53.795 "uuid": "012329b4-55c0-5069-8338-516e65f39566", 00:20:53.795 "is_configured": true, 00:20:53.795 "data_offset": 256, 00:20:53.795 "data_size": 7936 00:20:53.795 } 00:20:53.795 ] 00:20:53.795 }' 00:20:53.795 06:30:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:53.795 06:30:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:54.055 06:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:54.055 06:30:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.055 06:30:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:54.055 [2024-11-26 06:30:38.038098] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:54.055 [2024-11-26 06:30:38.038145] bdev_raid.c:1899:raid_bdev_deconfigure: 
*DEBUG*: raid bdev state changing from online to offline 00:20:54.055 [2024-11-26 06:30:38.038243] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:54.055 [2024-11-26 06:30:38.038328] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:54.055 [2024-11-26 06:30:38.038379] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:54.055 06:30:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.055 06:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:54.055 06:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:20:54.055 06:30:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.055 06:30:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:54.055 06:30:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.055 06:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:20:54.055 06:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:20:54.055 06:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:20:54.055 06:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:54.055 06:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:54.055 06:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:20:54.055 06:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:54.056 06:30:38 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:54.056 06:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:54.056 06:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:20:54.056 06:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:54.056 06:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:54.056 06:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:54.315 /dev/nbd0 00:20:54.315 06:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:54.315 06:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:54.315 06:30:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:54.315 06:30:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:20:54.315 06:30:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:54.315 06:30:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:54.315 06:30:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:54.315 06:30:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:20:54.315 06:30:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:54.315 06:30:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:54.315 06:30:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:54.315 1+0 records in 00:20:54.315 1+0 records out 00:20:54.315 
4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000375102 s, 10.9 MB/s 00:20:54.315 06:30:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:54.315 06:30:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:20:54.315 06:30:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:54.315 06:30:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:54.315 06:30:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:20:54.315 06:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:54.315 06:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:54.315 06:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:20:54.575 /dev/nbd1 00:20:54.575 06:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:54.575 06:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:54.575 06:30:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:20:54.575 06:30:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:20:54.575 06:30:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:54.575 06:30:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:54.575 06:30:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:20:54.575 06:30:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:20:54.575 06:30:38 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:54.575 06:30:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:54.575 06:30:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:54.575 1+0 records in 00:20:54.575 1+0 records out 00:20:54.575 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000398873 s, 10.3 MB/s 00:20:54.575 06:30:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:54.575 06:30:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:20:54.575 06:30:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:54.575 06:30:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:54.575 06:30:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:20:54.575 06:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:54.575 06:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:54.575 06:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:20:54.835 06:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:20:54.835 06:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:54.835 06:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:54.835 06:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:54.835 06:30:38 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@51 -- # local i 00:20:54.835 06:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:54.835 06:30:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:55.094 06:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:55.094 06:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:55.094 06:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:55.094 06:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:55.094 06:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:55.094 06:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:55.094 06:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:20:55.094 06:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:20:55.094 06:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:55.095 06:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:20:55.353 06:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:55.353 06:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:55.353 06:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:55.353 06:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:55.353 06:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:55.353 06:30:39 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:55.353 06:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:20:55.353 06:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:20:55.353 06:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:20:55.353 06:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:20:55.353 06:30:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.353 06:30:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:55.353 06:30:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.353 06:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:55.353 06:30:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.353 06:30:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:55.354 [2024-11-26 06:30:39.289699] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:55.354 [2024-11-26 06:30:39.289772] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:55.354 [2024-11-26 06:30:39.289801] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:20:55.354 [2024-11-26 06:30:39.289811] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:55.354 [2024-11-26 06:30:39.292430] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:55.354 [2024-11-26 06:30:39.292467] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:55.354 [2024-11-26 06:30:39.292575] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: 
raid superblock found on bdev spare 00:20:55.354 [2024-11-26 06:30:39.292635] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:55.354 [2024-11-26 06:30:39.292801] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:55.354 spare 00:20:55.354 06:30:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.354 06:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:20:55.354 06:30:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.354 06:30:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:55.354 [2024-11-26 06:30:39.392741] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:20:55.354 [2024-11-26 06:30:39.392781] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:55.354 [2024-11-26 06:30:39.393136] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:20:55.354 [2024-11-26 06:30:39.393347] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:20:55.354 [2024-11-26 06:30:39.393368] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:20:55.354 [2024-11-26 06:30:39.393643] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:55.354 06:30:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.354 06:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:55.354 06:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:55.354 06:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:55.354 
06:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:55.354 06:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:55.354 06:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:55.354 06:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:55.354 06:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:55.354 06:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:55.354 06:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:55.354 06:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:55.354 06:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:55.354 06:30:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.354 06:30:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:55.354 06:30:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.354 06:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:55.354 "name": "raid_bdev1", 00:20:55.354 "uuid": "4501a39a-ec3c-404f-b2a9-94a34e0c60ba", 00:20:55.354 "strip_size_kb": 0, 00:20:55.354 "state": "online", 00:20:55.354 "raid_level": "raid1", 00:20:55.354 "superblock": true, 00:20:55.354 "num_base_bdevs": 2, 00:20:55.354 "num_base_bdevs_discovered": 2, 00:20:55.354 "num_base_bdevs_operational": 2, 00:20:55.354 "base_bdevs_list": [ 00:20:55.354 { 00:20:55.354 "name": "spare", 00:20:55.354 "uuid": "8f77efb5-8543-5ba1-9e14-ecd78073c78a", 00:20:55.354 "is_configured": true, 00:20:55.354 "data_offset": 256, 00:20:55.354 
"data_size": 7936 00:20:55.354 }, 00:20:55.354 { 00:20:55.354 "name": "BaseBdev2", 00:20:55.354 "uuid": "012329b4-55c0-5069-8338-516e65f39566", 00:20:55.354 "is_configured": true, 00:20:55.354 "data_offset": 256, 00:20:55.354 "data_size": 7936 00:20:55.354 } 00:20:55.354 ] 00:20:55.354 }' 00:20:55.354 06:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:55.354 06:30:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:55.922 06:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:55.922 06:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:55.922 06:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:55.922 06:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:55.922 06:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:55.922 06:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:55.922 06:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:55.922 06:30:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.922 06:30:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:55.922 06:30:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.922 06:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:55.922 "name": "raid_bdev1", 00:20:55.922 "uuid": "4501a39a-ec3c-404f-b2a9-94a34e0c60ba", 00:20:55.922 "strip_size_kb": 0, 00:20:55.922 "state": "online", 00:20:55.922 "raid_level": "raid1", 00:20:55.922 "superblock": true, 00:20:55.922 "num_base_bdevs": 2, 
00:20:55.922 "num_base_bdevs_discovered": 2, 00:20:55.922 "num_base_bdevs_operational": 2, 00:20:55.922 "base_bdevs_list": [ 00:20:55.922 { 00:20:55.922 "name": "spare", 00:20:55.922 "uuid": "8f77efb5-8543-5ba1-9e14-ecd78073c78a", 00:20:55.922 "is_configured": true, 00:20:55.922 "data_offset": 256, 00:20:55.922 "data_size": 7936 00:20:55.922 }, 00:20:55.922 { 00:20:55.922 "name": "BaseBdev2", 00:20:55.922 "uuid": "012329b4-55c0-5069-8338-516e65f39566", 00:20:55.922 "is_configured": true, 00:20:55.922 "data_offset": 256, 00:20:55.922 "data_size": 7936 00:20:55.922 } 00:20:55.922 ] 00:20:55.922 }' 00:20:55.922 06:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:55.922 06:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:55.922 06:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:55.922 06:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:55.922 06:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:55.922 06:30:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:20:55.922 06:30:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.922 06:30:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:55.922 06:30:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.922 06:30:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:20:55.923 06:30:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:55.923 06:30:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.923 06:30:40 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:55.923 [2024-11-26 06:30:40.020615] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:55.923 06:30:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.923 06:30:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:55.923 06:30:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:55.923 06:30:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:55.923 06:30:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:55.923 06:30:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:55.923 06:30:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:55.923 06:30:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:55.923 06:30:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:55.923 06:30:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:55.923 06:30:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:55.923 06:30:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:55.923 06:30:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:55.923 06:30:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.923 06:30:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:55.923 06:30:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.182 
06:30:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:56.182 "name": "raid_bdev1", 00:20:56.182 "uuid": "4501a39a-ec3c-404f-b2a9-94a34e0c60ba", 00:20:56.182 "strip_size_kb": 0, 00:20:56.182 "state": "online", 00:20:56.182 "raid_level": "raid1", 00:20:56.182 "superblock": true, 00:20:56.182 "num_base_bdevs": 2, 00:20:56.182 "num_base_bdevs_discovered": 1, 00:20:56.182 "num_base_bdevs_operational": 1, 00:20:56.182 "base_bdevs_list": [ 00:20:56.182 { 00:20:56.182 "name": null, 00:20:56.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:56.182 "is_configured": false, 00:20:56.182 "data_offset": 0, 00:20:56.182 "data_size": 7936 00:20:56.182 }, 00:20:56.182 { 00:20:56.182 "name": "BaseBdev2", 00:20:56.182 "uuid": "012329b4-55c0-5069-8338-516e65f39566", 00:20:56.182 "is_configured": true, 00:20:56.182 "data_offset": 256, 00:20:56.182 "data_size": 7936 00:20:56.182 } 00:20:56.182 ] 00:20:56.182 }' 00:20:56.182 06:30:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:56.182 06:30:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:56.441 06:30:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:56.441 06:30:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.441 06:30:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:56.441 [2024-11-26 06:30:40.472035] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:56.441 [2024-11-26 06:30:40.472335] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:56.441 [2024-11-26 06:30:40.472368] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:20:56.441 [2024-11-26 06:30:40.472419] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:56.441 [2024-11-26 06:30:40.490163] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:20:56.441 06:30:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.441 06:30:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:20:56.441 [2024-11-26 06:30:40.492397] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:57.379 06:30:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:57.379 06:30:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:57.379 06:30:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:57.379 06:30:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:57.379 06:30:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:57.379 06:30:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:57.379 06:30:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:57.379 06:30:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.379 06:30:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:57.639 06:30:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.639 06:30:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:57.639 "name": "raid_bdev1", 00:20:57.639 "uuid": "4501a39a-ec3c-404f-b2a9-94a34e0c60ba", 00:20:57.639 "strip_size_kb": 0, 00:20:57.639 "state": "online", 
00:20:57.639 "raid_level": "raid1", 00:20:57.639 "superblock": true, 00:20:57.639 "num_base_bdevs": 2, 00:20:57.639 "num_base_bdevs_discovered": 2, 00:20:57.639 "num_base_bdevs_operational": 2, 00:20:57.639 "process": { 00:20:57.639 "type": "rebuild", 00:20:57.639 "target": "spare", 00:20:57.639 "progress": { 00:20:57.639 "blocks": 2560, 00:20:57.639 "percent": 32 00:20:57.639 } 00:20:57.639 }, 00:20:57.639 "base_bdevs_list": [ 00:20:57.639 { 00:20:57.639 "name": "spare", 00:20:57.639 "uuid": "8f77efb5-8543-5ba1-9e14-ecd78073c78a", 00:20:57.639 "is_configured": true, 00:20:57.639 "data_offset": 256, 00:20:57.639 "data_size": 7936 00:20:57.639 }, 00:20:57.639 { 00:20:57.639 "name": "BaseBdev2", 00:20:57.639 "uuid": "012329b4-55c0-5069-8338-516e65f39566", 00:20:57.639 "is_configured": true, 00:20:57.639 "data_offset": 256, 00:20:57.639 "data_size": 7936 00:20:57.639 } 00:20:57.639 ] 00:20:57.639 }' 00:20:57.639 06:30:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:57.639 06:30:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:57.639 06:30:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:57.639 06:30:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:57.639 06:30:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:20:57.639 06:30:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.639 06:30:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:57.639 [2024-11-26 06:30:41.652601] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:57.639 [2024-11-26 06:30:41.702257] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:57.639 [2024-11-26 
06:30:41.702327] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:57.639 [2024-11-26 06:30:41.702343] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:57.639 [2024-11-26 06:30:41.702353] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:57.639 06:30:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.639 06:30:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:57.639 06:30:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:57.639 06:30:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:57.639 06:30:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:57.639 06:30:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:57.639 06:30:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:57.639 06:30:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:57.639 06:30:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:57.639 06:30:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:57.639 06:30:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:57.639 06:30:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:57.639 06:30:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:57.639 06:30:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.639 06:30:41 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:20:57.639 06:30:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.899 06:30:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:57.899 "name": "raid_bdev1", 00:20:57.899 "uuid": "4501a39a-ec3c-404f-b2a9-94a34e0c60ba", 00:20:57.899 "strip_size_kb": 0, 00:20:57.899 "state": "online", 00:20:57.899 "raid_level": "raid1", 00:20:57.899 "superblock": true, 00:20:57.899 "num_base_bdevs": 2, 00:20:57.899 "num_base_bdevs_discovered": 1, 00:20:57.899 "num_base_bdevs_operational": 1, 00:20:57.899 "base_bdevs_list": [ 00:20:57.899 { 00:20:57.899 "name": null, 00:20:57.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:57.899 "is_configured": false, 00:20:57.899 "data_offset": 0, 00:20:57.899 "data_size": 7936 00:20:57.899 }, 00:20:57.899 { 00:20:57.899 "name": "BaseBdev2", 00:20:57.899 "uuid": "012329b4-55c0-5069-8338-516e65f39566", 00:20:57.899 "is_configured": true, 00:20:57.899 "data_offset": 256, 00:20:57.899 "data_size": 7936 00:20:57.899 } 00:20:57.899 ] 00:20:57.899 }' 00:20:57.899 06:30:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:57.899 06:30:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:58.158 06:30:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:58.158 06:30:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.158 06:30:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:58.158 [2024-11-26 06:30:42.184301] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:58.158 [2024-11-26 06:30:42.184390] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:58.158 [2024-11-26 06:30:42.184432] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000ab80 00:20:58.158 [2024-11-26 06:30:42.184446] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:58.158 [2024-11-26 06:30:42.185047] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:58.158 [2024-11-26 06:30:42.185093] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:58.158 [2024-11-26 06:30:42.185223] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:58.158 [2024-11-26 06:30:42.185248] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:58.158 [2024-11-26 06:30:42.185260] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:20:58.158 [2024-11-26 06:30:42.185292] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:58.158 [2024-11-26 06:30:42.202828] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:20:58.158 spare 00:20:58.158 06:30:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.158 06:30:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:20:58.158 [2024-11-26 06:30:42.205131] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:59.165 06:30:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:59.165 06:30:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:59.165 06:30:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:59.165 06:30:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:59.165 06:30:43 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:59.165 06:30:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:59.165 06:30:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.165 06:30:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:59.165 06:30:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:59.165 06:30:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.165 06:30:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:59.165 "name": "raid_bdev1", 00:20:59.165 "uuid": "4501a39a-ec3c-404f-b2a9-94a34e0c60ba", 00:20:59.165 "strip_size_kb": 0, 00:20:59.165 "state": "online", 00:20:59.165 "raid_level": "raid1", 00:20:59.165 "superblock": true, 00:20:59.165 "num_base_bdevs": 2, 00:20:59.165 "num_base_bdevs_discovered": 2, 00:20:59.165 "num_base_bdevs_operational": 2, 00:20:59.165 "process": { 00:20:59.165 "type": "rebuild", 00:20:59.165 "target": "spare", 00:20:59.165 "progress": { 00:20:59.165 "blocks": 2560, 00:20:59.165 "percent": 32 00:20:59.165 } 00:20:59.165 }, 00:20:59.165 "base_bdevs_list": [ 00:20:59.165 { 00:20:59.165 "name": "spare", 00:20:59.165 "uuid": "8f77efb5-8543-5ba1-9e14-ecd78073c78a", 00:20:59.165 "is_configured": true, 00:20:59.165 "data_offset": 256, 00:20:59.165 "data_size": 7936 00:20:59.165 }, 00:20:59.165 { 00:20:59.165 "name": "BaseBdev2", 00:20:59.165 "uuid": "012329b4-55c0-5069-8338-516e65f39566", 00:20:59.165 "is_configured": true, 00:20:59.165 "data_offset": 256, 00:20:59.165 "data_size": 7936 00:20:59.165 } 00:20:59.165 ] 00:20:59.165 }' 00:20:59.165 06:30:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:59.425 06:30:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:20:59.425 06:30:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:59.425 06:30:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:59.425 06:30:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:20:59.425 06:30:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.425 06:30:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:59.425 [2024-11-26 06:30:43.361321] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:59.425 [2024-11-26 06:30:43.414947] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:59.425 [2024-11-26 06:30:43.415031] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:59.425 [2024-11-26 06:30:43.415050] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:59.425 [2024-11-26 06:30:43.415058] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:59.425 06:30:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.425 06:30:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:59.426 06:30:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:59.426 06:30:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:59.426 06:30:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:59.426 06:30:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:59.426 06:30:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:20:59.426 06:30:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:59.426 06:30:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:59.426 06:30:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:59.426 06:30:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:59.426 06:30:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:59.426 06:30:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:59.426 06:30:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.426 06:30:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:59.426 06:30:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.426 06:30:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:59.426 "name": "raid_bdev1", 00:20:59.426 "uuid": "4501a39a-ec3c-404f-b2a9-94a34e0c60ba", 00:20:59.426 "strip_size_kb": 0, 00:20:59.426 "state": "online", 00:20:59.426 "raid_level": "raid1", 00:20:59.426 "superblock": true, 00:20:59.426 "num_base_bdevs": 2, 00:20:59.426 "num_base_bdevs_discovered": 1, 00:20:59.426 "num_base_bdevs_operational": 1, 00:20:59.426 "base_bdevs_list": [ 00:20:59.426 { 00:20:59.426 "name": null, 00:20:59.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:59.426 "is_configured": false, 00:20:59.426 "data_offset": 0, 00:20:59.426 "data_size": 7936 00:20:59.426 }, 00:20:59.426 { 00:20:59.426 "name": "BaseBdev2", 00:20:59.426 "uuid": "012329b4-55c0-5069-8338-516e65f39566", 00:20:59.426 "is_configured": true, 00:20:59.426 "data_offset": 256, 00:20:59.426 "data_size": 7936 00:20:59.426 } 00:20:59.426 ] 00:20:59.426 }' 
00:20:59.426 06:30:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:59.426 06:30:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:59.995 06:30:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:59.995 06:30:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:59.995 06:30:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:59.995 06:30:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:59.995 06:30:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:59.995 06:30:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:59.995 06:30:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:59.995 06:30:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.995 06:30:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:59.995 06:30:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.995 06:30:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:59.995 "name": "raid_bdev1", 00:20:59.995 "uuid": "4501a39a-ec3c-404f-b2a9-94a34e0c60ba", 00:20:59.995 "strip_size_kb": 0, 00:20:59.995 "state": "online", 00:20:59.995 "raid_level": "raid1", 00:20:59.995 "superblock": true, 00:20:59.995 "num_base_bdevs": 2, 00:20:59.995 "num_base_bdevs_discovered": 1, 00:20:59.995 "num_base_bdevs_operational": 1, 00:20:59.995 "base_bdevs_list": [ 00:20:59.995 { 00:20:59.995 "name": null, 00:20:59.995 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:59.995 "is_configured": false, 00:20:59.995 "data_offset": 0, 
00:20:59.995 "data_size": 7936 00:20:59.995 }, 00:20:59.995 { 00:20:59.995 "name": "BaseBdev2", 00:20:59.995 "uuid": "012329b4-55c0-5069-8338-516e65f39566", 00:20:59.995 "is_configured": true, 00:20:59.995 "data_offset": 256, 00:20:59.995 "data_size": 7936 00:20:59.995 } 00:20:59.995 ] 00:20:59.995 }' 00:20:59.995 06:30:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:59.995 06:30:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:59.995 06:30:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:59.995 06:30:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:59.995 06:30:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:20:59.995 06:30:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.995 06:30:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:59.995 06:30:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.995 06:30:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:59.995 06:30:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.995 06:30:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:59.995 [2024-11-26 06:30:44.056506] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:59.995 [2024-11-26 06:30:44.056572] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:59.995 [2024-11-26 06:30:44.056600] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:20:59.995 [2024-11-26 06:30:44.056619] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:59.995 [2024-11-26 06:30:44.057205] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:59.995 [2024-11-26 06:30:44.057232] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:59.995 [2024-11-26 06:30:44.057339] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:59.995 [2024-11-26 06:30:44.057357] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:59.996 [2024-11-26 06:30:44.057369] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:59.996 [2024-11-26 06:30:44.057382] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:20:59.996 BaseBdev1 00:20:59.996 06:30:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.996 06:30:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:21:01.376 06:30:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:01.376 06:30:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:01.376 06:30:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:01.376 06:30:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:01.376 06:30:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:01.376 06:30:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:01.376 06:30:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:01.376 06:30:45 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:01.376 06:30:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:01.376 06:30:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:01.376 06:30:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:01.376 06:30:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:01.376 06:30:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.376 06:30:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:01.376 06:30:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.376 06:30:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:01.376 "name": "raid_bdev1", 00:21:01.376 "uuid": "4501a39a-ec3c-404f-b2a9-94a34e0c60ba", 00:21:01.376 "strip_size_kb": 0, 00:21:01.376 "state": "online", 00:21:01.376 "raid_level": "raid1", 00:21:01.376 "superblock": true, 00:21:01.376 "num_base_bdevs": 2, 00:21:01.376 "num_base_bdevs_discovered": 1, 00:21:01.376 "num_base_bdevs_operational": 1, 00:21:01.376 "base_bdevs_list": [ 00:21:01.376 { 00:21:01.376 "name": null, 00:21:01.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:01.376 "is_configured": false, 00:21:01.376 "data_offset": 0, 00:21:01.376 "data_size": 7936 00:21:01.376 }, 00:21:01.376 { 00:21:01.376 "name": "BaseBdev2", 00:21:01.376 "uuid": "012329b4-55c0-5069-8338-516e65f39566", 00:21:01.376 "is_configured": true, 00:21:01.376 "data_offset": 256, 00:21:01.376 "data_size": 7936 00:21:01.376 } 00:21:01.376 ] 00:21:01.376 }' 00:21:01.376 06:30:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:01.376 06:30:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 
00:21:01.689 06:30:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:01.689 06:30:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:01.689 06:30:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:01.689 06:30:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:01.689 06:30:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:01.689 06:30:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:01.689 06:30:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:01.689 06:30:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.689 06:30:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:01.689 06:30:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.689 06:30:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:01.689 "name": "raid_bdev1", 00:21:01.689 "uuid": "4501a39a-ec3c-404f-b2a9-94a34e0c60ba", 00:21:01.689 "strip_size_kb": 0, 00:21:01.689 "state": "online", 00:21:01.689 "raid_level": "raid1", 00:21:01.689 "superblock": true, 00:21:01.689 "num_base_bdevs": 2, 00:21:01.689 "num_base_bdevs_discovered": 1, 00:21:01.689 "num_base_bdevs_operational": 1, 00:21:01.689 "base_bdevs_list": [ 00:21:01.689 { 00:21:01.689 "name": null, 00:21:01.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:01.689 "is_configured": false, 00:21:01.689 "data_offset": 0, 00:21:01.689 "data_size": 7936 00:21:01.689 }, 00:21:01.689 { 00:21:01.689 "name": "BaseBdev2", 00:21:01.689 "uuid": "012329b4-55c0-5069-8338-516e65f39566", 00:21:01.689 "is_configured": true, 
00:21:01.689 "data_offset": 256, 00:21:01.689 "data_size": 7936 00:21:01.689 } 00:21:01.689 ] 00:21:01.689 }' 00:21:01.689 06:30:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:01.689 06:30:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:01.689 06:30:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:01.689 06:30:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:01.689 06:30:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:01.689 06:30:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # local es=0 00:21:01.689 06:30:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:01.689 06:30:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:01.689 06:30:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:01.689 06:30:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:01.689 06:30:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:01.689 06:30:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:01.689 06:30:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.689 06:30:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:01.689 [2024-11-26 06:30:45.657936] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:01.689 [2024-11-26 06:30:45.658163] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:21:01.689 [2024-11-26 06:30:45.658179] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:01.689 request: 00:21:01.689 { 00:21:01.689 "base_bdev": "BaseBdev1", 00:21:01.689 "raid_bdev": "raid_bdev1", 00:21:01.689 "method": "bdev_raid_add_base_bdev", 00:21:01.689 "req_id": 1 00:21:01.689 } 00:21:01.689 Got JSON-RPC error response 00:21:01.689 response: 00:21:01.689 { 00:21:01.689 "code": -22, 00:21:01.689 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:21:01.689 } 00:21:01.689 06:30:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:01.689 06:30:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:21:01.689 06:30:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:01.689 06:30:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:01.689 06:30:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:01.689 06:30:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:21:02.629 06:30:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:02.629 06:30:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:02.629 06:30:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:02.629 06:30:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:02.629 06:30:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:02.629 06:30:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:21:02.629 06:30:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:02.629 06:30:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:02.629 06:30:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:02.629 06:30:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:02.629 06:30:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:02.629 06:30:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:02.629 06:30:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.629 06:30:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:02.629 06:30:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.629 06:30:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:02.629 "name": "raid_bdev1", 00:21:02.629 "uuid": "4501a39a-ec3c-404f-b2a9-94a34e0c60ba", 00:21:02.629 "strip_size_kb": 0, 00:21:02.629 "state": "online", 00:21:02.629 "raid_level": "raid1", 00:21:02.629 "superblock": true, 00:21:02.629 "num_base_bdevs": 2, 00:21:02.629 "num_base_bdevs_discovered": 1, 00:21:02.629 "num_base_bdevs_operational": 1, 00:21:02.629 "base_bdevs_list": [ 00:21:02.629 { 00:21:02.629 "name": null, 00:21:02.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:02.629 "is_configured": false, 00:21:02.629 "data_offset": 0, 00:21:02.629 "data_size": 7936 00:21:02.629 }, 00:21:02.629 { 00:21:02.629 "name": "BaseBdev2", 00:21:02.629 "uuid": "012329b4-55c0-5069-8338-516e65f39566", 00:21:02.629 "is_configured": true, 00:21:02.629 "data_offset": 256, 00:21:02.629 "data_size": 7936 00:21:02.629 } 00:21:02.629 ] 00:21:02.629 }' 
00:21:02.629 06:30:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:02.629 06:30:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:03.199 06:30:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:03.199 06:30:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:03.199 06:30:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:03.199 06:30:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:03.199 06:30:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:03.199 06:30:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:03.199 06:30:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:03.199 06:30:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.199 06:30:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:03.199 06:30:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.199 06:30:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:03.199 "name": "raid_bdev1", 00:21:03.199 "uuid": "4501a39a-ec3c-404f-b2a9-94a34e0c60ba", 00:21:03.199 "strip_size_kb": 0, 00:21:03.199 "state": "online", 00:21:03.199 "raid_level": "raid1", 00:21:03.199 "superblock": true, 00:21:03.199 "num_base_bdevs": 2, 00:21:03.199 "num_base_bdevs_discovered": 1, 00:21:03.199 "num_base_bdevs_operational": 1, 00:21:03.199 "base_bdevs_list": [ 00:21:03.199 { 00:21:03.199 "name": null, 00:21:03.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:03.199 "is_configured": false, 00:21:03.199 "data_offset": 0, 
00:21:03.199 "data_size": 7936 00:21:03.199 }, 00:21:03.199 { 00:21:03.199 "name": "BaseBdev2", 00:21:03.199 "uuid": "012329b4-55c0-5069-8338-516e65f39566", 00:21:03.199 "is_configured": true, 00:21:03.199 "data_offset": 256, 00:21:03.199 "data_size": 7936 00:21:03.199 } 00:21:03.199 ] 00:21:03.199 }' 00:21:03.199 06:30:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:03.199 06:30:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:03.199 06:30:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:03.199 06:30:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:03.199 06:30:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 87067 00:21:03.199 06:30:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 87067 ']' 00:21:03.199 06:30:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 87067 00:21:03.199 06:30:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:21:03.199 06:30:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:03.199 06:30:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87067 00:21:03.199 06:30:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:03.199 06:30:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:03.199 killing process with pid 87067 00:21:03.199 06:30:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87067' 00:21:03.199 06:30:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 87067 00:21:03.199 Received shutdown signal, test time was about 
60.000000 seconds 00:21:03.199 00:21:03.199 Latency(us) 00:21:03.199 [2024-11-26T06:30:47.336Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:03.199 [2024-11-26T06:30:47.336Z] =================================================================================================================== 00:21:03.199 [2024-11-26T06:30:47.336Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:03.199 [2024-11-26 06:30:47.273433] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:03.199 06:30:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 87067 00:21:03.199 [2024-11-26 06:30:47.273607] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:03.200 [2024-11-26 06:30:47.273671] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:03.200 [2024-11-26 06:30:47.273685] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:21:03.459 [2024-11-26 06:30:47.587041] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:04.840 06:30:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:21:04.840 00:21:04.840 real 0m20.070s 00:21:04.840 user 0m25.946s 00:21:04.840 sys 0m2.902s 00:21:04.840 06:30:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:04.840 06:30:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:04.840 ************************************ 00:21:04.840 END TEST raid_rebuild_test_sb_4k 00:21:04.840 ************************************ 00:21:04.840 06:30:48 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:21:04.840 06:30:48 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:21:04.840 06:30:48 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:04.840 06:30:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:04.840 06:30:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:04.840 ************************************ 00:21:04.840 START TEST raid_state_function_test_sb_md_separate 00:21:04.840 ************************************ 00:21:04.840 06:30:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:21:04.840 06:30:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:21:04.840 06:30:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:21:04.840 06:30:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:21:04.840 06:30:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:21:04.840 06:30:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:21:04.840 06:30:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:04.840 06:30:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:21:04.840 06:30:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:04.840 06:30:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:04.840 06:30:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:21:04.840 06:30:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:04.840 06:30:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:04.840 06:30:48 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:04.840 06:30:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:21:04.840 06:30:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:21:04.840 06:30:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:21:04.840 06:30:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:21:04.840 06:30:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:21:04.840 06:30:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:21:04.840 06:30:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:21:04.840 06:30:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:21:04.840 06:30:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:21:04.841 06:30:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87758 00:21:04.841 06:30:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:21:04.841 06:30:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87758' 00:21:04.841 Process raid pid: 87758 00:21:04.841 06:30:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87758 00:21:04.841 06:30:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87758 ']' 00:21:04.841 06:30:48 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:04.841 06:30:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:04.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:04.841 06:30:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:04.841 06:30:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:04.841 06:30:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:04.841 [2024-11-26 06:30:48.957384] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:21:04.841 [2024-11-26 06:30:48.957531] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:05.104 [2024-11-26 06:30:49.142290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:05.368 [2024-11-26 06:30:49.285159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:05.627 [2024-11-26 06:30:49.532192] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:05.627 [2024-11-26 06:30:49.532249] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:05.887 06:30:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:05.887 06:30:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:21:05.887 06:30:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # 
rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:05.887 06:30:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.887 06:30:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:05.887 [2024-11-26 06:30:49.817402] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:05.887 [2024-11-26 06:30:49.817470] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:05.887 [2024-11-26 06:30:49.817483] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:05.887 [2024-11-26 06:30:49.817494] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:05.887 06:30:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.887 06:30:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:05.887 06:30:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:05.887 06:30:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:05.887 06:30:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:05.887 06:30:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:05.887 06:30:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:05.887 06:30:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:05.887 06:30:49 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:05.887 06:30:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:05.887 06:30:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:05.887 06:30:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:05.887 06:30:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:05.887 06:30:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.887 06:30:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:05.887 06:30:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.887 06:30:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:05.887 "name": "Existed_Raid", 00:21:05.887 "uuid": "c65eaf29-6649-4fcb-b18a-218455ab2246", 00:21:05.887 "strip_size_kb": 0, 00:21:05.887 "state": "configuring", 00:21:05.887 "raid_level": "raid1", 00:21:05.887 "superblock": true, 00:21:05.887 "num_base_bdevs": 2, 00:21:05.887 "num_base_bdevs_discovered": 0, 00:21:05.887 "num_base_bdevs_operational": 2, 00:21:05.887 "base_bdevs_list": [ 00:21:05.887 { 00:21:05.887 "name": "BaseBdev1", 00:21:05.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:05.887 "is_configured": false, 00:21:05.887 "data_offset": 0, 00:21:05.887 "data_size": 0 00:21:05.887 }, 00:21:05.887 { 00:21:05.887 "name": "BaseBdev2", 00:21:05.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:05.887 "is_configured": false, 00:21:05.887 "data_offset": 0, 00:21:05.887 "data_size": 0 00:21:05.887 } 00:21:05.887 ] 00:21:05.887 }' 00:21:05.887 06:30:49 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:05.887 06:30:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:06.458 06:30:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:06.458 06:30:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.458 06:30:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:06.458 [2024-11-26 06:30:50.292568] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:06.458 [2024-11-26 06:30:50.292615] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:21:06.458 06:30:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.458 06:30:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:06.458 06:30:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.458 06:30:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:06.458 [2024-11-26 06:30:50.304523] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:06.458 [2024-11-26 06:30:50.304575] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:06.458 [2024-11-26 06:30:50.304586] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:06.458 [2024-11-26 06:30:50.304600] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:06.458 06:30:50 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.458 06:30:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:21:06.458 06:30:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.458 06:30:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:06.458 [2024-11-26 06:30:50.362013] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:06.458 BaseBdev1 00:21:06.458 06:30:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.458 06:30:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:21:06.458 06:30:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:21:06.458 06:30:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:06.458 06:30:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:21:06.458 06:30:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:06.458 06:30:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:06.458 06:30:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:06.458 06:30:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.458 06:30:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:06.458 06:30:50 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.458 06:30:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:06.458 06:30:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.458 06:30:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:06.458 [ 00:21:06.458 { 00:21:06.458 "name": "BaseBdev1", 00:21:06.458 "aliases": [ 00:21:06.458 "38812dc4-5db4-422b-8589-a07dc8e2f7ea" 00:21:06.458 ], 00:21:06.458 "product_name": "Malloc disk", 00:21:06.458 "block_size": 4096, 00:21:06.458 "num_blocks": 8192, 00:21:06.458 "uuid": "38812dc4-5db4-422b-8589-a07dc8e2f7ea", 00:21:06.458 "md_size": 32, 00:21:06.458 "md_interleave": false, 00:21:06.458 "dif_type": 0, 00:21:06.458 "assigned_rate_limits": { 00:21:06.458 "rw_ios_per_sec": 0, 00:21:06.458 "rw_mbytes_per_sec": 0, 00:21:06.458 "r_mbytes_per_sec": 0, 00:21:06.458 "w_mbytes_per_sec": 0 00:21:06.458 }, 00:21:06.458 "claimed": true, 00:21:06.458 "claim_type": "exclusive_write", 00:21:06.458 "zoned": false, 00:21:06.458 "supported_io_types": { 00:21:06.458 "read": true, 00:21:06.458 "write": true, 00:21:06.458 "unmap": true, 00:21:06.458 "flush": true, 00:21:06.458 "reset": true, 00:21:06.458 "nvme_admin": false, 00:21:06.458 "nvme_io": false, 00:21:06.458 "nvme_io_md": false, 00:21:06.458 "write_zeroes": true, 00:21:06.458 "zcopy": true, 00:21:06.458 "get_zone_info": false, 00:21:06.458 "zone_management": false, 00:21:06.458 "zone_append": false, 00:21:06.458 "compare": false, 00:21:06.458 "compare_and_write": false, 00:21:06.458 "abort": true, 00:21:06.459 "seek_hole": false, 00:21:06.459 "seek_data": false, 00:21:06.459 "copy": true, 00:21:06.459 "nvme_iov_md": false 00:21:06.459 }, 00:21:06.459 "memory_domains": [ 00:21:06.459 { 00:21:06.459 "dma_device_id": "system", 00:21:06.459 "dma_device_type": 1 00:21:06.459 }, 
00:21:06.459 { 00:21:06.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:06.459 "dma_device_type": 2 00:21:06.459 } 00:21:06.459 ], 00:21:06.459 "driver_specific": {} 00:21:06.459 } 00:21:06.459 ] 00:21:06.459 06:30:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.459 06:30:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:21:06.459 06:30:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:06.459 06:30:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:06.459 06:30:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:06.459 06:30:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:06.459 06:30:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:06.459 06:30:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:06.459 06:30:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:06.459 06:30:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:06.459 06:30:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:06.459 06:30:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:06.459 06:30:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:06.459 06:30:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:21:06.459 06:30:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:06.459 06:30:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:06.459 06:30:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.459 06:30:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:06.459 "name": "Existed_Raid", 00:21:06.459 "uuid": "23238fee-6be5-4e00-beac-9057b8f12912", 00:21:06.459 "strip_size_kb": 0, 00:21:06.459 "state": "configuring", 00:21:06.459 "raid_level": "raid1", 00:21:06.459 "superblock": true, 00:21:06.459 "num_base_bdevs": 2, 00:21:06.459 "num_base_bdevs_discovered": 1, 00:21:06.459 "num_base_bdevs_operational": 2, 00:21:06.459 "base_bdevs_list": [ 00:21:06.459 { 00:21:06.459 "name": "BaseBdev1", 00:21:06.459 "uuid": "38812dc4-5db4-422b-8589-a07dc8e2f7ea", 00:21:06.459 "is_configured": true, 00:21:06.459 "data_offset": 256, 00:21:06.459 "data_size": 7936 00:21:06.459 }, 00:21:06.459 { 00:21:06.459 "name": "BaseBdev2", 00:21:06.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:06.459 "is_configured": false, 00:21:06.459 "data_offset": 0, 00:21:06.459 "data_size": 0 00:21:06.459 } 00:21:06.459 ] 00:21:06.459 }' 00:21:06.459 06:30:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:06.459 06:30:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:06.719 06:30:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:06.719 06:30:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.719 06:30:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 
-- # set +x 00:21:06.719 [2024-11-26 06:30:50.829310] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:06.719 [2024-11-26 06:30:50.829381] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:21:06.719 06:30:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.719 06:30:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:06.719 06:30:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.719 06:30:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:06.719 [2024-11-26 06:30:50.841336] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:06.719 [2024-11-26 06:30:50.843744] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:06.719 [2024-11-26 06:30:50.844007] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:06.719 06:30:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.719 06:30:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:21:06.719 06:30:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:06.719 06:30:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:06.719 06:30:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:06.719 06:30:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:21:06.719 06:30:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:06.719 06:30:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:06.719 06:30:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:06.719 06:30:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:06.719 06:30:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:06.719 06:30:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:06.720 06:30:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:06.979 06:30:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:06.980 06:30:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.980 06:30:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:06.980 06:30:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:06.980 06:30:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.980 06:30:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:06.980 "name": "Existed_Raid", 00:21:06.980 "uuid": "af9261b0-0f56-47e6-a15b-9e2268ff1a2f", 00:21:06.980 "strip_size_kb": 0, 00:21:06.980 "state": "configuring", 00:21:06.980 "raid_level": "raid1", 00:21:06.980 "superblock": true, 00:21:06.980 "num_base_bdevs": 2, 00:21:06.980 "num_base_bdevs_discovered": 1, 00:21:06.980 
"num_base_bdevs_operational": 2, 00:21:06.980 "base_bdevs_list": [ 00:21:06.980 { 00:21:06.980 "name": "BaseBdev1", 00:21:06.980 "uuid": "38812dc4-5db4-422b-8589-a07dc8e2f7ea", 00:21:06.980 "is_configured": true, 00:21:06.980 "data_offset": 256, 00:21:06.980 "data_size": 7936 00:21:06.980 }, 00:21:06.980 { 00:21:06.980 "name": "BaseBdev2", 00:21:06.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:06.980 "is_configured": false, 00:21:06.980 "data_offset": 0, 00:21:06.980 "data_size": 0 00:21:06.980 } 00:21:06.980 ] 00:21:06.980 }' 00:21:06.980 06:30:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:06.980 06:30:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:07.240 06:30:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:21:07.240 06:30:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.240 06:30:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:07.240 [2024-11-26 06:30:51.361049] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:07.240 [2024-11-26 06:30:51.361339] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:07.240 [2024-11-26 06:30:51.361356] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:07.240 [2024-11-26 06:30:51.361457] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:21:07.240 [2024-11-26 06:30:51.361659] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:07.240 [2024-11-26 06:30:51.361690] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:21:07.240 [2024-11-26 
06:30:51.361803] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:07.240 BaseBdev2 00:21:07.240 06:30:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.240 06:30:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:21:07.240 06:30:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:21:07.240 06:30:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:07.240 06:30:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:21:07.240 06:30:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:07.240 06:30:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:07.240 06:30:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:07.240 06:30:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.240 06:30:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:07.500 06:30:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.500 06:30:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:07.500 06:30:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.500 06:30:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:07.500 [ 00:21:07.500 { 00:21:07.500 "name": "BaseBdev2", 00:21:07.500 "aliases": [ 00:21:07.500 
"f33b7b2c-36fc-4529-9471-3f3510f9dc58" 00:21:07.500 ], 00:21:07.500 "product_name": "Malloc disk", 00:21:07.500 "block_size": 4096, 00:21:07.500 "num_blocks": 8192, 00:21:07.500 "uuid": "f33b7b2c-36fc-4529-9471-3f3510f9dc58", 00:21:07.500 "md_size": 32, 00:21:07.500 "md_interleave": false, 00:21:07.500 "dif_type": 0, 00:21:07.500 "assigned_rate_limits": { 00:21:07.500 "rw_ios_per_sec": 0, 00:21:07.500 "rw_mbytes_per_sec": 0, 00:21:07.500 "r_mbytes_per_sec": 0, 00:21:07.500 "w_mbytes_per_sec": 0 00:21:07.500 }, 00:21:07.500 "claimed": true, 00:21:07.500 "claim_type": "exclusive_write", 00:21:07.500 "zoned": false, 00:21:07.500 "supported_io_types": { 00:21:07.500 "read": true, 00:21:07.500 "write": true, 00:21:07.500 "unmap": true, 00:21:07.500 "flush": true, 00:21:07.500 "reset": true, 00:21:07.500 "nvme_admin": false, 00:21:07.500 "nvme_io": false, 00:21:07.500 "nvme_io_md": false, 00:21:07.500 "write_zeroes": true, 00:21:07.500 "zcopy": true, 00:21:07.500 "get_zone_info": false, 00:21:07.500 "zone_management": false, 00:21:07.500 "zone_append": false, 00:21:07.500 "compare": false, 00:21:07.500 "compare_and_write": false, 00:21:07.500 "abort": true, 00:21:07.500 "seek_hole": false, 00:21:07.500 "seek_data": false, 00:21:07.500 "copy": true, 00:21:07.500 "nvme_iov_md": false 00:21:07.500 }, 00:21:07.500 "memory_domains": [ 00:21:07.500 { 00:21:07.500 "dma_device_id": "system", 00:21:07.500 "dma_device_type": 1 00:21:07.500 }, 00:21:07.500 { 00:21:07.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:07.500 "dma_device_type": 2 00:21:07.500 } 00:21:07.500 ], 00:21:07.500 "driver_specific": {} 00:21:07.500 } 00:21:07.500 ] 00:21:07.500 06:30:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.500 06:30:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:21:07.500 06:30:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( 
i++ )) 00:21:07.500 06:30:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:07.500 06:30:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:21:07.500 06:30:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:07.500 06:30:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:07.500 06:30:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:07.500 06:30:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:07.500 06:30:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:07.500 06:30:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:07.500 06:30:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:07.500 06:30:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:07.500 06:30:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:07.500 06:30:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:07.500 06:30:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:07.500 06:30:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.500 06:30:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:07.500 06:30:51 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.500 06:30:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:07.500 "name": "Existed_Raid", 00:21:07.500 "uuid": "af9261b0-0f56-47e6-a15b-9e2268ff1a2f", 00:21:07.500 "strip_size_kb": 0, 00:21:07.500 "state": "online", 00:21:07.500 "raid_level": "raid1", 00:21:07.500 "superblock": true, 00:21:07.500 "num_base_bdevs": 2, 00:21:07.500 "num_base_bdevs_discovered": 2, 00:21:07.500 "num_base_bdevs_operational": 2, 00:21:07.500 "base_bdevs_list": [ 00:21:07.500 { 00:21:07.500 "name": "BaseBdev1", 00:21:07.500 "uuid": "38812dc4-5db4-422b-8589-a07dc8e2f7ea", 00:21:07.500 "is_configured": true, 00:21:07.500 "data_offset": 256, 00:21:07.500 "data_size": 7936 00:21:07.500 }, 00:21:07.500 { 00:21:07.500 "name": "BaseBdev2", 00:21:07.500 "uuid": "f33b7b2c-36fc-4529-9471-3f3510f9dc58", 00:21:07.500 "is_configured": true, 00:21:07.500 "data_offset": 256, 00:21:07.500 "data_size": 7936 00:21:07.500 } 00:21:07.500 ] 00:21:07.500 }' 00:21:07.500 06:30:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:07.500 06:30:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:07.760 06:30:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:21:07.760 06:30:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:07.760 06:30:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:07.760 06:30:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:07.760 06:30:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:21:07.760 06:30:51 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:07.760 06:30:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:07.760 06:30:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.760 06:30:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:07.760 06:30:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:07.760 [2024-11-26 06:30:51.856655] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:07.760 06:30:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.760 06:30:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:07.760 "name": "Existed_Raid", 00:21:07.760 "aliases": [ 00:21:07.760 "af9261b0-0f56-47e6-a15b-9e2268ff1a2f" 00:21:07.760 ], 00:21:07.760 "product_name": "Raid Volume", 00:21:07.760 "block_size": 4096, 00:21:07.760 "num_blocks": 7936, 00:21:07.760 "uuid": "af9261b0-0f56-47e6-a15b-9e2268ff1a2f", 00:21:07.760 "md_size": 32, 00:21:07.760 "md_interleave": false, 00:21:07.760 "dif_type": 0, 00:21:07.760 "assigned_rate_limits": { 00:21:07.760 "rw_ios_per_sec": 0, 00:21:07.760 "rw_mbytes_per_sec": 0, 00:21:07.760 "r_mbytes_per_sec": 0, 00:21:07.760 "w_mbytes_per_sec": 0 00:21:07.760 }, 00:21:07.760 "claimed": false, 00:21:07.760 "zoned": false, 00:21:07.760 "supported_io_types": { 00:21:07.760 "read": true, 00:21:07.760 "write": true, 00:21:07.760 "unmap": false, 00:21:07.760 "flush": false, 00:21:07.760 "reset": true, 00:21:07.760 "nvme_admin": false, 00:21:07.760 "nvme_io": false, 00:21:07.760 "nvme_io_md": false, 00:21:07.760 "write_zeroes": true, 00:21:07.760 "zcopy": false, 00:21:07.760 "get_zone_info": 
false, 00:21:07.760 "zone_management": false, 00:21:07.760 "zone_append": false, 00:21:07.760 "compare": false, 00:21:07.760 "compare_and_write": false, 00:21:07.760 "abort": false, 00:21:07.760 "seek_hole": false, 00:21:07.760 "seek_data": false, 00:21:07.760 "copy": false, 00:21:07.760 "nvme_iov_md": false 00:21:07.760 }, 00:21:07.760 "memory_domains": [ 00:21:07.760 { 00:21:07.760 "dma_device_id": "system", 00:21:07.760 "dma_device_type": 1 00:21:07.760 }, 00:21:07.760 { 00:21:07.760 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:07.760 "dma_device_type": 2 00:21:07.760 }, 00:21:07.760 { 00:21:07.760 "dma_device_id": "system", 00:21:07.760 "dma_device_type": 1 00:21:07.760 }, 00:21:07.760 { 00:21:07.760 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:07.760 "dma_device_type": 2 00:21:07.760 } 00:21:07.760 ], 00:21:07.760 "driver_specific": { 00:21:07.760 "raid": { 00:21:07.760 "uuid": "af9261b0-0f56-47e6-a15b-9e2268ff1a2f", 00:21:07.760 "strip_size_kb": 0, 00:21:07.760 "state": "online", 00:21:07.760 "raid_level": "raid1", 00:21:07.760 "superblock": true, 00:21:07.760 "num_base_bdevs": 2, 00:21:07.760 "num_base_bdevs_discovered": 2, 00:21:07.760 "num_base_bdevs_operational": 2, 00:21:07.760 "base_bdevs_list": [ 00:21:07.760 { 00:21:07.760 "name": "BaseBdev1", 00:21:07.760 "uuid": "38812dc4-5db4-422b-8589-a07dc8e2f7ea", 00:21:07.760 "is_configured": true, 00:21:07.760 "data_offset": 256, 00:21:07.760 "data_size": 7936 00:21:07.760 }, 00:21:07.760 { 00:21:07.760 "name": "BaseBdev2", 00:21:07.760 "uuid": "f33b7b2c-36fc-4529-9471-3f3510f9dc58", 00:21:07.760 "is_configured": true, 00:21:07.760 "data_offset": 256, 00:21:07.760 "data_size": 7936 00:21:07.760 } 00:21:07.760 ] 00:21:07.760 } 00:21:07.760 } 00:21:07.760 }' 00:21:07.760 06:30:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:08.019 06:30:51 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:21:08.019 BaseBdev2' 00:21:08.019 06:30:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:08.019 06:30:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:21:08.019 06:30:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:08.019 06:30:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:21:08.019 06:30:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:08.019 06:30:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.019 06:30:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:08.019 06:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.019 06:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:21:08.020 06:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:21:08.020 06:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:08.020 06:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:08.020 06:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:21:08.020 06:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.020 06:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:08.020 06:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.020 06:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:21:08.020 06:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:21:08.020 06:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:08.020 06:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.020 06:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:08.020 [2024-11-26 06:30:52.083919] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:08.278 06:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.278 06:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:21:08.278 06:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:21:08.278 06:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:08.278 06:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:21:08.278 06:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:21:08.278 06:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid 
online raid1 0 1 00:21:08.278 06:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:08.278 06:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:08.278 06:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:08.278 06:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:08.278 06:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:08.278 06:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:08.278 06:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:08.278 06:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:08.278 06:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:08.278 06:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:08.278 06:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:08.278 06:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.278 06:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:08.278 06:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.278 06:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:08.278 "name": "Existed_Raid", 00:21:08.278 "uuid": 
"af9261b0-0f56-47e6-a15b-9e2268ff1a2f", 00:21:08.278 "strip_size_kb": 0, 00:21:08.278 "state": "online", 00:21:08.278 "raid_level": "raid1", 00:21:08.278 "superblock": true, 00:21:08.278 "num_base_bdevs": 2, 00:21:08.278 "num_base_bdevs_discovered": 1, 00:21:08.278 "num_base_bdevs_operational": 1, 00:21:08.278 "base_bdevs_list": [ 00:21:08.278 { 00:21:08.278 "name": null, 00:21:08.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:08.278 "is_configured": false, 00:21:08.278 "data_offset": 0, 00:21:08.278 "data_size": 7936 00:21:08.278 }, 00:21:08.278 { 00:21:08.278 "name": "BaseBdev2", 00:21:08.279 "uuid": "f33b7b2c-36fc-4529-9471-3f3510f9dc58", 00:21:08.279 "is_configured": true, 00:21:08.279 "data_offset": 256, 00:21:08.279 "data_size": 7936 00:21:08.279 } 00:21:08.279 ] 00:21:08.279 }' 00:21:08.279 06:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:08.279 06:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:08.538 06:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:21:08.538 06:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:08.538 06:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:08.538 06:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.538 06:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:08.798 06:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:08.798 06:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.798 06:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 
-- # raid_bdev=Existed_Raid 00:21:08.798 06:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:08.798 06:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:21:08.798 06:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.798 06:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:08.798 [2024-11-26 06:30:52.724832] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:08.798 [2024-11-26 06:30:52.724969] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:08.798 [2024-11-26 06:30:52.833845] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:08.798 [2024-11-26 06:30:52.833909] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:08.798 [2024-11-26 06:30:52.833922] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:21:08.798 06:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.798 06:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:08.798 06:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:08.798 06:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:08.798 06:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:21:08.798 06:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.798 06:30:52 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:08.798 06:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.798 06:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:21:08.798 06:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:21:08.798 06:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:21:08.798 06:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87758 00:21:08.798 06:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87758 ']' 00:21:08.798 06:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87758 00:21:08.798 06:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:21:08.798 06:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:08.798 06:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87758 00:21:08.798 06:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:09.057 06:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:09.057 06:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87758' 00:21:09.057 killing process with pid 87758 00:21:09.057 06:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87758 00:21:09.057 [2024-11-26 06:30:52.930810] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:21:09.057 06:30:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87758 00:21:09.057 [2024-11-26 06:30:52.949721] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:10.434 06:30:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:21:10.434 00:21:10.434 real 0m5.320s 00:21:10.434 user 0m7.460s 00:21:10.434 sys 0m1.070s 00:21:10.434 ************************************ 00:21:10.434 END TEST raid_state_function_test_sb_md_separate 00:21:10.434 ************************************ 00:21:10.434 06:30:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:10.434 06:30:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:10.434 06:30:54 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:21:10.434 06:30:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:21:10.434 06:30:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:10.434 06:30:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:10.434 ************************************ 00:21:10.434 START TEST raid_superblock_test_md_separate 00:21:10.434 ************************************ 00:21:10.434 06:30:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:21:10.434 06:30:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:21:10.434 06:30:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:21:10.434 06:30:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:21:10.434 06:30:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 
00:21:10.434 06:30:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:21:10.434 06:30:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:21:10.434 06:30:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:21:10.434 06:30:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:21:10.434 06:30:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:21:10.434 06:30:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:21:10.434 06:30:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:21:10.434 06:30:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:21:10.434 06:30:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:21:10.434 06:30:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:21:10.434 06:30:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:21:10.434 06:30:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=88010 00:21:10.434 06:30:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:21:10.434 06:30:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 88010 00:21:10.434 06:30:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 88010 ']' 00:21:10.435 06:30:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:10.435 06:30:54 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:21:10.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:10.435 06:30:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:10.435 06:30:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:10.435 06:30:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:10.435 [2024-11-26 06:30:54.342546] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:21:10.435 [2024-11-26 06:30:54.342707] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88010 ] 00:21:10.435 [2024-11-26 06:30:54.520833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:10.694 [2024-11-26 06:30:54.658430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:10.953 [2024-11-26 06:30:54.891128] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:10.953 [2024-11-26 06:30:54.891170] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:11.213 06:30:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:11.213 06:30:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:21:11.213 06:30:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:21:11.213 06:30:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:11.213 06:30:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 
-- # local bdev_malloc=malloc1 00:21:11.213 06:30:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:21:11.213 06:30:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:21:11.213 06:30:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:11.213 06:30:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:11.213 06:30:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:11.213 06:30:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:21:11.213 06:30:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.213 06:30:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:11.213 malloc1 00:21:11.213 06:30:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.213 06:30:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:11.213 06:30:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.213 06:30:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:11.213 [2024-11-26 06:30:55.274339] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:11.213 [2024-11-26 06:30:55.274460] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:11.213 [2024-11-26 06:30:55.274531] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:11.213 [2024-11-26 
06:30:55.274576] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:11.213 [2024-11-26 06:30:55.276944] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:11.213 [2024-11-26 06:30:55.277017] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:11.213 pt1 00:21:11.213 06:30:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.213 06:30:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:11.213 06:30:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:11.213 06:30:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:21:11.213 06:30:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:21:11.213 06:30:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:11.214 06:30:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:11.214 06:30:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:11.214 06:30:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:11.214 06:30:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:21:11.214 06:30:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.214 06:30:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:11.214 malloc2 00:21:11.214 06:30:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.214 06:30:55 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:11.214 06:30:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.214 06:30:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:11.214 [2024-11-26 06:30:55.341044] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:11.214 [2024-11-26 06:30:55.341161] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:11.214 [2024-11-26 06:30:55.341204] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:11.214 [2024-11-26 06:30:55.341236] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:11.214 [2024-11-26 06:30:55.343508] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:11.214 [2024-11-26 06:30:55.343580] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:11.474 pt2 00:21:11.474 06:30:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.474 06:30:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:11.474 06:30:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:11.474 06:30:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:21:11.474 06:30:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.474 06:30:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:11.474 [2024-11-26 06:30:55.353048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:11.474 
[2024-11-26 06:30:55.355082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:11.474 [2024-11-26 06:30:55.355267] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:11.474 [2024-11-26 06:30:55.355282] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:11.474 [2024-11-26 06:30:55.355360] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:21:11.474 [2024-11-26 06:30:55.355497] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:11.474 [2024-11-26 06:30:55.355509] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:11.474 [2024-11-26 06:30:55.355603] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:11.474 06:30:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.474 06:30:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:11.474 06:30:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:11.474 06:30:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:11.474 06:30:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:11.474 06:30:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:11.474 06:30:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:11.474 06:30:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:11.474 06:30:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:21:11.474 06:30:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:11.474 06:30:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:11.474 06:30:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:11.474 06:30:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:11.474 06:30:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.474 06:30:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:11.474 06:30:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.474 06:30:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:11.474 "name": "raid_bdev1", 00:21:11.474 "uuid": "be4581c5-8951-4736-ad07-dd2c3b884356", 00:21:11.474 "strip_size_kb": 0, 00:21:11.474 "state": "online", 00:21:11.474 "raid_level": "raid1", 00:21:11.474 "superblock": true, 00:21:11.474 "num_base_bdevs": 2, 00:21:11.474 "num_base_bdevs_discovered": 2, 00:21:11.474 "num_base_bdevs_operational": 2, 00:21:11.474 "base_bdevs_list": [ 00:21:11.474 { 00:21:11.474 "name": "pt1", 00:21:11.474 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:11.474 "is_configured": true, 00:21:11.474 "data_offset": 256, 00:21:11.474 "data_size": 7936 00:21:11.474 }, 00:21:11.474 { 00:21:11.474 "name": "pt2", 00:21:11.474 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:11.474 "is_configured": true, 00:21:11.474 "data_offset": 256, 00:21:11.474 "data_size": 7936 00:21:11.474 } 00:21:11.474 ] 00:21:11.474 }' 00:21:11.474 06:30:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:11.474 06:30:55 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:21:11.733 06:30:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:21:11.733 06:30:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:11.733 06:30:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:11.733 06:30:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:11.733 06:30:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:21:11.733 06:30:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:11.733 06:30:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:11.733 06:30:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:11.733 06:30:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.733 06:30:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:11.733 [2024-11-26 06:30:55.852558] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:11.992 06:30:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.992 06:30:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:11.992 "name": "raid_bdev1", 00:21:11.992 "aliases": [ 00:21:11.992 "be4581c5-8951-4736-ad07-dd2c3b884356" 00:21:11.992 ], 00:21:11.992 "product_name": "Raid Volume", 00:21:11.992 "block_size": 4096, 00:21:11.992 "num_blocks": 7936, 00:21:11.992 "uuid": "be4581c5-8951-4736-ad07-dd2c3b884356", 00:21:11.992 "md_size": 32, 00:21:11.992 "md_interleave": false, 00:21:11.992 "dif_type": 0, 00:21:11.992 
"assigned_rate_limits": { 00:21:11.992 "rw_ios_per_sec": 0, 00:21:11.992 "rw_mbytes_per_sec": 0, 00:21:11.992 "r_mbytes_per_sec": 0, 00:21:11.992 "w_mbytes_per_sec": 0 00:21:11.992 }, 00:21:11.992 "claimed": false, 00:21:11.992 "zoned": false, 00:21:11.992 "supported_io_types": { 00:21:11.992 "read": true, 00:21:11.992 "write": true, 00:21:11.992 "unmap": false, 00:21:11.992 "flush": false, 00:21:11.992 "reset": true, 00:21:11.992 "nvme_admin": false, 00:21:11.992 "nvme_io": false, 00:21:11.992 "nvme_io_md": false, 00:21:11.992 "write_zeroes": true, 00:21:11.992 "zcopy": false, 00:21:11.992 "get_zone_info": false, 00:21:11.992 "zone_management": false, 00:21:11.992 "zone_append": false, 00:21:11.992 "compare": false, 00:21:11.992 "compare_and_write": false, 00:21:11.992 "abort": false, 00:21:11.992 "seek_hole": false, 00:21:11.992 "seek_data": false, 00:21:11.992 "copy": false, 00:21:11.992 "nvme_iov_md": false 00:21:11.992 }, 00:21:11.992 "memory_domains": [ 00:21:11.992 { 00:21:11.992 "dma_device_id": "system", 00:21:11.992 "dma_device_type": 1 00:21:11.992 }, 00:21:11.992 { 00:21:11.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:11.992 "dma_device_type": 2 00:21:11.992 }, 00:21:11.992 { 00:21:11.992 "dma_device_id": "system", 00:21:11.992 "dma_device_type": 1 00:21:11.992 }, 00:21:11.992 { 00:21:11.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:11.992 "dma_device_type": 2 00:21:11.992 } 00:21:11.992 ], 00:21:11.992 "driver_specific": { 00:21:11.992 "raid": { 00:21:11.992 "uuid": "be4581c5-8951-4736-ad07-dd2c3b884356", 00:21:11.992 "strip_size_kb": 0, 00:21:11.992 "state": "online", 00:21:11.992 "raid_level": "raid1", 00:21:11.992 "superblock": true, 00:21:11.992 "num_base_bdevs": 2, 00:21:11.992 "num_base_bdevs_discovered": 2, 00:21:11.992 "num_base_bdevs_operational": 2, 00:21:11.992 "base_bdevs_list": [ 00:21:11.992 { 00:21:11.992 "name": "pt1", 00:21:11.992 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:11.992 "is_configured": true, 
00:21:11.992 "data_offset": 256, 00:21:11.992 "data_size": 7936 00:21:11.992 }, 00:21:11.992 { 00:21:11.992 "name": "pt2", 00:21:11.992 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:11.992 "is_configured": true, 00:21:11.992 "data_offset": 256, 00:21:11.992 "data_size": 7936 00:21:11.992 } 00:21:11.992 ] 00:21:11.992 } 00:21:11.992 } 00:21:11.992 }' 00:21:11.992 06:30:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:11.992 06:30:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:11.992 pt2' 00:21:11.992 06:30:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:11.992 06:30:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:21:11.992 06:30:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:11.992 06:30:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:11.992 06:30:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:11.992 06:30:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.992 06:30:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:11.992 06:30:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.992 06:30:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:21:11.992 06:30:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ 
\f\a\l\s\e\ \0 ]] 00:21:11.992 06:30:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:11.992 06:30:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:11.992 06:30:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:11.992 06:30:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.992 06:30:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:11.992 06:30:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.992 06:30:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:21:11.992 06:30:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:21:11.992 06:30:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:11.992 06:30:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.992 06:30:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:11.992 06:30:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:21:11.992 [2024-11-26 06:30:56.080076] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:11.993 06:30:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.993 06:30:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=be4581c5-8951-4736-ad07-dd2c3b884356 00:21:12.254 06:30:56 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@436 -- # '[' -z be4581c5-8951-4736-ad07-dd2c3b884356 ']' 00:21:12.254 06:30:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:12.254 06:30:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.254 06:30:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:12.254 [2024-11-26 06:30:56.131704] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:12.254 [2024-11-26 06:30:56.131732] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:12.254 [2024-11-26 06:30:56.131833] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:12.254 [2024-11-26 06:30:56.131901] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:12.254 [2024-11-26 06:30:56.131915] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:12.254 06:30:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.254 06:30:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:12.254 06:30:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.254 06:30:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:12.254 06:30:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:21:12.254 06:30:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.254 06:30:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:21:12.254 06:30:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- 
# '[' -n '' ']' 00:21:12.254 06:30:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:12.254 06:30:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:21:12.254 06:30:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.254 06:30:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:12.254 06:30:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.254 06:30:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:12.254 06:30:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:21:12.254 06:30:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.254 06:30:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:12.254 06:30:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.254 06:30:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:12.254 06:30:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:21:12.254 06:30:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.254 06:30:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:12.254 06:30:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.254 06:30:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:21:12.254 06:30:56 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:12.254 06:30:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:21:12.254 06:30:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:12.254 06:30:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:12.254 06:30:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:12.254 06:30:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:12.254 06:30:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:12.254 06:30:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:12.254 06:30:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.254 06:30:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:12.254 [2024-11-26 06:30:56.263488] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:12.254 [2024-11-26 06:30:56.265640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:12.254 [2024-11-26 06:30:56.265722] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:21:12.254 [2024-11-26 06:30:56.265780] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:21:12.254 [2024-11-26 06:30:56.265796] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:12.254 [2024-11-26 06:30:56.265807] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:21:12.254 request: 00:21:12.254 { 00:21:12.254 "name": "raid_bdev1", 00:21:12.254 "raid_level": "raid1", 00:21:12.254 "base_bdevs": [ 00:21:12.254 "malloc1", 00:21:12.254 "malloc2" 00:21:12.254 ], 00:21:12.254 "superblock": false, 00:21:12.254 "method": "bdev_raid_create", 00:21:12.254 "req_id": 1 00:21:12.254 } 00:21:12.254 Got JSON-RPC error response 00:21:12.254 response: 00:21:12.254 { 00:21:12.254 "code": -17, 00:21:12.254 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:12.254 } 00:21:12.254 06:30:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:12.254 06:30:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:21:12.254 06:30:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:12.254 06:30:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:12.254 06:30:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:12.254 06:30:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:21:12.254 06:30:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:12.254 06:30:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.254 06:30:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:12.254 06:30:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.254 06:30:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # 
raid_bdev= 00:21:12.254 06:30:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:21:12.254 06:30:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:12.254 06:30:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.254 06:30:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:12.254 [2024-11-26 06:30:56.315367] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:12.254 [2024-11-26 06:30:56.315460] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:12.254 [2024-11-26 06:30:56.315494] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:12.254 [2024-11-26 06:30:56.315524] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:12.254 [2024-11-26 06:30:56.317840] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:12.254 [2024-11-26 06:30:56.317914] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:12.254 [2024-11-26 06:30:56.318000] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:12.254 [2024-11-26 06:30:56.318084] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:12.254 pt1 00:21:12.254 06:30:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.254 06:30:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:21:12.254 06:30:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:12.254 06:30:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 
-- # local expected_state=configuring 00:21:12.254 06:30:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:12.254 06:30:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:12.254 06:30:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:12.254 06:30:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:12.254 06:30:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:12.254 06:30:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:12.254 06:30:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:12.255 06:30:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:12.255 06:30:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:12.255 06:30:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.255 06:30:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:12.255 06:30:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.255 06:30:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:12.255 "name": "raid_bdev1", 00:21:12.255 "uuid": "be4581c5-8951-4736-ad07-dd2c3b884356", 00:21:12.255 "strip_size_kb": 0, 00:21:12.255 "state": "configuring", 00:21:12.255 "raid_level": "raid1", 00:21:12.255 "superblock": true, 00:21:12.255 "num_base_bdevs": 2, 00:21:12.255 "num_base_bdevs_discovered": 1, 00:21:12.255 "num_base_bdevs_operational": 2, 00:21:12.255 "base_bdevs_list": [ 00:21:12.255 { 
00:21:12.255 "name": "pt1", 00:21:12.255 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:12.255 "is_configured": true, 00:21:12.255 "data_offset": 256, 00:21:12.255 "data_size": 7936 00:21:12.255 }, 00:21:12.255 { 00:21:12.255 "name": null, 00:21:12.255 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:12.255 "is_configured": false, 00:21:12.255 "data_offset": 256, 00:21:12.255 "data_size": 7936 00:21:12.255 } 00:21:12.255 ] 00:21:12.255 }' 00:21:12.255 06:30:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:12.255 06:30:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:12.824 06:30:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:21:12.824 06:30:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:21:12.824 06:30:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:12.824 06:30:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:12.824 06:30:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.824 06:30:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:12.824 [2024-11-26 06:30:56.790550] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:12.824 [2024-11-26 06:30:56.790662] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:12.824 [2024-11-26 06:30:56.790687] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:21:12.824 [2024-11-26 06:30:56.790699] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:12.824 [2024-11-26 06:30:56.790921] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:21:12.824 [2024-11-26 06:30:56.790937] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:12.824 [2024-11-26 06:30:56.790980] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:12.824 [2024-11-26 06:30:56.791001] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:12.824 [2024-11-26 06:30:56.791125] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:12.824 [2024-11-26 06:30:56.791138] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:12.824 [2024-11-26 06:30:56.791211] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:21:12.824 [2024-11-26 06:30:56.791329] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:12.824 [2024-11-26 06:30:56.791337] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:21:12.824 [2024-11-26 06:30:56.791436] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:12.825 pt2 00:21:12.825 06:30:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.825 06:30:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:21:12.825 06:30:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:12.825 06:30:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:12.825 06:30:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:12.825 06:30:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:12.825 06:30:56 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:12.825 06:30:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:12.825 06:30:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:12.825 06:30:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:12.825 06:30:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:12.825 06:30:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:12.825 06:30:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:12.825 06:30:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:12.825 06:30:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.825 06:30:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:12.825 06:30:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:12.825 06:30:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.825 06:30:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:12.825 "name": "raid_bdev1", 00:21:12.825 "uuid": "be4581c5-8951-4736-ad07-dd2c3b884356", 00:21:12.825 "strip_size_kb": 0, 00:21:12.825 "state": "online", 00:21:12.825 "raid_level": "raid1", 00:21:12.825 "superblock": true, 00:21:12.825 "num_base_bdevs": 2, 00:21:12.825 "num_base_bdevs_discovered": 2, 00:21:12.825 "num_base_bdevs_operational": 2, 00:21:12.825 "base_bdevs_list": [ 00:21:12.825 { 00:21:12.825 "name": "pt1", 00:21:12.825 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:12.825 
"is_configured": true, 00:21:12.825 "data_offset": 256, 00:21:12.825 "data_size": 7936 00:21:12.825 }, 00:21:12.825 { 00:21:12.825 "name": "pt2", 00:21:12.825 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:12.825 "is_configured": true, 00:21:12.825 "data_offset": 256, 00:21:12.825 "data_size": 7936 00:21:12.825 } 00:21:12.825 ] 00:21:12.825 }' 00:21:12.825 06:30:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:12.825 06:30:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:13.397 06:30:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:21:13.397 06:30:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:13.397 06:30:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:13.397 06:30:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:13.397 06:30:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:21:13.397 06:30:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:13.397 06:30:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:13.397 06:30:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.397 06:30:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:13.397 06:30:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:13.397 [2024-11-26 06:30:57.286107] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:13.397 06:30:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:21:13.397 06:30:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:13.397 "name": "raid_bdev1", 00:21:13.397 "aliases": [ 00:21:13.397 "be4581c5-8951-4736-ad07-dd2c3b884356" 00:21:13.397 ], 00:21:13.397 "product_name": "Raid Volume", 00:21:13.397 "block_size": 4096, 00:21:13.397 "num_blocks": 7936, 00:21:13.397 "uuid": "be4581c5-8951-4736-ad07-dd2c3b884356", 00:21:13.397 "md_size": 32, 00:21:13.397 "md_interleave": false, 00:21:13.397 "dif_type": 0, 00:21:13.397 "assigned_rate_limits": { 00:21:13.397 "rw_ios_per_sec": 0, 00:21:13.397 "rw_mbytes_per_sec": 0, 00:21:13.397 "r_mbytes_per_sec": 0, 00:21:13.397 "w_mbytes_per_sec": 0 00:21:13.397 }, 00:21:13.397 "claimed": false, 00:21:13.397 "zoned": false, 00:21:13.397 "supported_io_types": { 00:21:13.397 "read": true, 00:21:13.397 "write": true, 00:21:13.397 "unmap": false, 00:21:13.397 "flush": false, 00:21:13.397 "reset": true, 00:21:13.397 "nvme_admin": false, 00:21:13.397 "nvme_io": false, 00:21:13.397 "nvme_io_md": false, 00:21:13.397 "write_zeroes": true, 00:21:13.397 "zcopy": false, 00:21:13.397 "get_zone_info": false, 00:21:13.397 "zone_management": false, 00:21:13.397 "zone_append": false, 00:21:13.397 "compare": false, 00:21:13.397 "compare_and_write": false, 00:21:13.397 "abort": false, 00:21:13.397 "seek_hole": false, 00:21:13.397 "seek_data": false, 00:21:13.397 "copy": false, 00:21:13.397 "nvme_iov_md": false 00:21:13.397 }, 00:21:13.397 "memory_domains": [ 00:21:13.397 { 00:21:13.397 "dma_device_id": "system", 00:21:13.397 "dma_device_type": 1 00:21:13.397 }, 00:21:13.397 { 00:21:13.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:13.397 "dma_device_type": 2 00:21:13.397 }, 00:21:13.397 { 00:21:13.397 "dma_device_id": "system", 00:21:13.397 "dma_device_type": 1 00:21:13.397 }, 00:21:13.397 { 00:21:13.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:13.397 "dma_device_type": 2 00:21:13.397 } 00:21:13.397 ], 00:21:13.397 "driver_specific": { 
00:21:13.397 "raid": { 00:21:13.397 "uuid": "be4581c5-8951-4736-ad07-dd2c3b884356", 00:21:13.397 "strip_size_kb": 0, 00:21:13.397 "state": "online", 00:21:13.397 "raid_level": "raid1", 00:21:13.397 "superblock": true, 00:21:13.397 "num_base_bdevs": 2, 00:21:13.397 "num_base_bdevs_discovered": 2, 00:21:13.397 "num_base_bdevs_operational": 2, 00:21:13.397 "base_bdevs_list": [ 00:21:13.397 { 00:21:13.397 "name": "pt1", 00:21:13.397 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:13.397 "is_configured": true, 00:21:13.397 "data_offset": 256, 00:21:13.397 "data_size": 7936 00:21:13.397 }, 00:21:13.397 { 00:21:13.397 "name": "pt2", 00:21:13.397 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:13.397 "is_configured": true, 00:21:13.397 "data_offset": 256, 00:21:13.397 "data_size": 7936 00:21:13.397 } 00:21:13.397 ] 00:21:13.397 } 00:21:13.397 } 00:21:13.397 }' 00:21:13.397 06:30:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:13.397 06:30:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:13.398 pt2' 00:21:13.398 06:30:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:13.398 06:30:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:21:13.398 06:30:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:13.398 06:30:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:13.398 06:30:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.398 06:30:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:13.398 06:30:57 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:13.398 06:30:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.398 06:30:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:21:13.398 06:30:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:21:13.398 06:30:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:13.398 06:30:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:13.398 06:30:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.398 06:30:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:13.398 06:30:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:13.398 06:30:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.398 06:30:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:21:13.398 06:30:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:21:13.398 06:30:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:13.398 06:30:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.398 06:30:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:13.398 06:30:57 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:21:13.398 [2024-11-26 06:30:57.509655] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:13.398 06:30:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.658 06:30:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' be4581c5-8951-4736-ad07-dd2c3b884356 '!=' be4581c5-8951-4736-ad07-dd2c3b884356 ']' 00:21:13.658 06:30:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:21:13.658 06:30:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:13.658 06:30:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:21:13.658 06:30:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:21:13.658 06:30:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.658 06:30:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:13.658 [2024-11-26 06:30:57.557344] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:21:13.658 06:30:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.658 06:30:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:13.658 06:30:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:13.658 06:30:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:13.658 06:30:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:13.658 06:30:57 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:13.658 06:30:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:13.658 06:30:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:13.658 06:30:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:13.658 06:30:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:13.658 06:30:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:13.658 06:30:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:13.658 06:30:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:13.658 06:30:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.658 06:30:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:13.658 06:30:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.658 06:30:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:13.658 "name": "raid_bdev1", 00:21:13.658 "uuid": "be4581c5-8951-4736-ad07-dd2c3b884356", 00:21:13.658 "strip_size_kb": 0, 00:21:13.658 "state": "online", 00:21:13.658 "raid_level": "raid1", 00:21:13.658 "superblock": true, 00:21:13.658 "num_base_bdevs": 2, 00:21:13.658 "num_base_bdevs_discovered": 1, 00:21:13.658 "num_base_bdevs_operational": 1, 00:21:13.658 "base_bdevs_list": [ 00:21:13.658 { 00:21:13.658 "name": null, 00:21:13.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:13.658 "is_configured": false, 00:21:13.658 "data_offset": 0, 00:21:13.658 "data_size": 7936 00:21:13.658 }, 00:21:13.658 { 00:21:13.658 
"name": "pt2", 00:21:13.658 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:13.658 "is_configured": true, 00:21:13.658 "data_offset": 256, 00:21:13.658 "data_size": 7936 00:21:13.658 } 00:21:13.658 ] 00:21:13.658 }' 00:21:13.658 06:30:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:13.658 06:30:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:13.918 06:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:13.918 06:30:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.918 06:30:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:13.918 [2024-11-26 06:30:58.028535] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:13.918 [2024-11-26 06:30:58.028613] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:13.918 [2024-11-26 06:30:58.028721] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:13.918 [2024-11-26 06:30:58.028833] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:13.918 [2024-11-26 06:30:58.028883] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:21:13.918 06:30:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.918 06:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:13.918 06:30:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.918 06:30:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:13.918 06:30:58 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:21:13.918 06:30:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.178 06:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:21:14.178 06:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:21:14.178 06:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:21:14.178 06:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:14.178 06:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:21:14.178 06:30:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.178 06:30:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:14.178 06:30:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.178 06:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:21:14.178 06:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:14.178 06:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:21:14.178 06:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:21:14.178 06:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:21:14.178 06:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:14.178 06:30:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.178 06:30:58 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:14.178 [2024-11-26 06:30:58.104464] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:14.178 [2024-11-26 06:30:58.104539] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:14.178 [2024-11-26 06:30:58.104560] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:21:14.178 [2024-11-26 06:30:58.104571] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:14.178 [2024-11-26 06:30:58.107007] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:14.178 [2024-11-26 06:30:58.107060] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:14.178 [2024-11-26 06:30:58.107123] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:14.178 [2024-11-26 06:30:58.107182] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:14.178 [2024-11-26 06:30:58.107287] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:21:14.178 [2024-11-26 06:30:58.107300] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:14.178 [2024-11-26 06:30:58.107388] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:14.178 [2024-11-26 06:30:58.107521] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:14.178 [2024-11-26 06:30:58.107528] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:21:14.178 [2024-11-26 06:30:58.107646] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:14.178 pt2 00:21:14.178 06:30:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.178 06:30:58 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:14.178 06:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:14.178 06:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:14.178 06:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:14.178 06:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:14.178 06:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:14.178 06:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:14.178 06:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:14.178 06:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:14.178 06:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:14.178 06:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:14.178 06:30:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.178 06:30:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:14.178 06:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:14.178 06:30:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.178 06:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:14.178 "name": "raid_bdev1", 00:21:14.178 "uuid": 
"be4581c5-8951-4736-ad07-dd2c3b884356", 00:21:14.178 "strip_size_kb": 0, 00:21:14.178 "state": "online", 00:21:14.178 "raid_level": "raid1", 00:21:14.178 "superblock": true, 00:21:14.178 "num_base_bdevs": 2, 00:21:14.178 "num_base_bdevs_discovered": 1, 00:21:14.178 "num_base_bdevs_operational": 1, 00:21:14.178 "base_bdevs_list": [ 00:21:14.178 { 00:21:14.178 "name": null, 00:21:14.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:14.178 "is_configured": false, 00:21:14.178 "data_offset": 256, 00:21:14.178 "data_size": 7936 00:21:14.178 }, 00:21:14.178 { 00:21:14.178 "name": "pt2", 00:21:14.178 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:14.178 "is_configured": true, 00:21:14.178 "data_offset": 256, 00:21:14.178 "data_size": 7936 00:21:14.178 } 00:21:14.178 ] 00:21:14.178 }' 00:21:14.178 06:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:14.178 06:30:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:14.439 06:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:14.439 06:30:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.439 06:30:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:14.439 [2024-11-26 06:30:58.527705] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:14.439 [2024-11-26 06:30:58.527794] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:14.439 [2024-11-26 06:30:58.527916] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:14.439 [2024-11-26 06:30:58.528040] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:14.439 [2024-11-26 06:30:58.528106] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:21:14.439 06:30:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.439 06:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:14.439 06:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:21:14.439 06:30:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.439 06:30:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:14.439 06:30:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.697 06:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:21:14.698 06:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:21:14.698 06:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:21:14.698 06:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:14.698 06:30:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.698 06:30:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:14.698 [2024-11-26 06:30:58.587616] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:14.698 [2024-11-26 06:30:58.587728] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:14.698 [2024-11-26 06:30:58.587770] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:21:14.698 [2024-11-26 06:30:58.587802] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:14.698 [2024-11-26 
06:30:58.590377] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:14.698 [2024-11-26 06:30:58.590450] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:14.698 [2024-11-26 06:30:58.590533] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:14.698 [2024-11-26 06:30:58.590599] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:14.698 [2024-11-26 06:30:58.590812] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:21:14.698 [2024-11-26 06:30:58.590865] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:14.698 [2024-11-26 06:30:58.590918] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:21:14.698 [2024-11-26 06:30:58.591042] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:14.698 [2024-11-26 06:30:58.591171] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:21:14.698 [2024-11-26 06:30:58.591208] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:14.698 [2024-11-26 06:30:58.591326] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:14.698 [2024-11-26 06:30:58.591471] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:21:14.698 [2024-11-26 06:30:58.591510] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:21:14.698 [2024-11-26 06:30:58.591715] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:14.698 pt1 00:21:14.698 06:30:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.698 06:30:58 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:21:14.698 06:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:14.698 06:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:14.698 06:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:14.698 06:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:14.698 06:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:14.698 06:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:14.698 06:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:14.698 06:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:14.698 06:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:14.698 06:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:14.698 06:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:14.698 06:30:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.698 06:30:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:14.698 06:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:14.698 06:30:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.698 06:30:58 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:14.698 "name": "raid_bdev1", 00:21:14.698 "uuid": "be4581c5-8951-4736-ad07-dd2c3b884356", 00:21:14.698 "strip_size_kb": 0, 00:21:14.698 "state": "online", 00:21:14.698 "raid_level": "raid1", 00:21:14.698 "superblock": true, 00:21:14.698 "num_base_bdevs": 2, 00:21:14.698 "num_base_bdevs_discovered": 1, 00:21:14.698 "num_base_bdevs_operational": 1, 00:21:14.698 "base_bdevs_list": [ 00:21:14.698 { 00:21:14.698 "name": null, 00:21:14.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:14.698 "is_configured": false, 00:21:14.698 "data_offset": 256, 00:21:14.698 "data_size": 7936 00:21:14.698 }, 00:21:14.698 { 00:21:14.698 "name": "pt2", 00:21:14.698 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:14.698 "is_configured": true, 00:21:14.698 "data_offset": 256, 00:21:14.698 "data_size": 7936 00:21:14.698 } 00:21:14.698 ] 00:21:14.698 }' 00:21:14.698 06:30:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:14.698 06:30:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:14.957 06:30:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:21:14.957 06:30:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:21:14.957 06:30:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.957 06:30:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:14.957 06:30:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.957 06:30:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:21:14.957 06:30:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 
00:21:14.957 06:30:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.957 06:30:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:15.225 06:30:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:21:15.225 [2024-11-26 06:30:59.095092] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:15.225 06:30:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.225 06:30:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' be4581c5-8951-4736-ad07-dd2c3b884356 '!=' be4581c5-8951-4736-ad07-dd2c3b884356 ']' 00:21:15.225 06:30:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 88010 00:21:15.225 06:30:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 88010 ']' 00:21:15.225 06:30:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 88010 00:21:15.225 06:30:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:21:15.225 06:30:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:15.225 06:30:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88010 00:21:15.225 killing process with pid 88010 00:21:15.225 06:30:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:15.225 06:30:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:15.225 06:30:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88010' 00:21:15.225 06:30:59 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@973 -- # kill 88010 00:21:15.225 06:30:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 88010 00:21:15.225 [2024-11-26 06:30:59.180240] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:15.225 [2024-11-26 06:30:59.180349] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:15.225 [2024-11-26 06:30:59.180419] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:15.225 [2024-11-26 06:30:59.180440] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:21:15.485 [2024-11-26 06:30:59.424701] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:16.868 06:31:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:21:16.869 00:21:16.869 real 0m6.401s 00:21:16.869 user 0m9.536s 00:21:16.869 sys 0m1.223s 00:21:16.869 ************************************ 00:21:16.869 END TEST raid_superblock_test_md_separate 00:21:16.869 ************************************ 00:21:16.869 06:31:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:16.869 06:31:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:16.869 06:31:00 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:21:16.869 06:31:00 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:21:16.869 06:31:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:21:16.869 06:31:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:16.869 06:31:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:16.869 ************************************ 00:21:16.869 START TEST raid_rebuild_test_sb_md_separate 00:21:16.869 
************************************ 00:21:16.869 06:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:21:16.869 06:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:21:16.869 06:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:21:16.869 06:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:21:16.869 06:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:21:16.869 06:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:21:16.869 06:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:21:16.869 06:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:16.869 06:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:21:16.869 06:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:16.869 06:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:16.869 06:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:21:16.869 06:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:16.869 06:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:16.869 06:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:16.869 06:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:21:16.869 06:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:21:16.869 06:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:21:16.869 06:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:21:16.869 06:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:21:16.869 06:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:21:16.869 06:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:21:16.869 06:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:21:16.869 06:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:21:16.869 06:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:21:16.869 06:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=88337 00:21:16.869 06:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:16.869 06:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 88337 00:21:16.869 06:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 88337 ']' 00:21:16.869 06:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:16.869 06:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:16.869 06:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:16.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:16.869 06:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:16.869 06:31:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:16.869 [2024-11-26 06:31:00.815221] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:21:16.869 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:16.869 Zero copy mechanism will not be used. 00:21:16.869 [2024-11-26 06:31:00.815407] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88337 ] 00:21:16.869 [2024-11-26 06:31:00.991287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:17.129 [2024-11-26 06:31:01.128822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:17.389 [2024-11-26 06:31:01.370132] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:17.389 [2024-11-26 06:31:01.370193] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:17.650 06:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:17.650 06:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:21:17.650 06:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:17.650 06:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:21:17.650 06:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.650 06:31:01 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:17.650 BaseBdev1_malloc 00:21:17.650 06:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.650 06:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:17.650 06:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.650 06:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:17.650 [2024-11-26 06:31:01.712713] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:17.650 [2024-11-26 06:31:01.712833] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:17.650 [2024-11-26 06:31:01.712875] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:17.650 [2024-11-26 06:31:01.712889] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:17.650 [2024-11-26 06:31:01.715230] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:17.650 [2024-11-26 06:31:01.715265] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:17.650 BaseBdev1 00:21:17.650 06:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.650 06:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:17.650 06:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:21:17.650 06:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.650 06:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:21:17.650 BaseBdev2_malloc 00:21:17.650 06:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.650 06:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:17.650 06:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.650 06:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:17.650 [2024-11-26 06:31:01.776827] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:17.650 [2024-11-26 06:31:01.776944] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:17.650 [2024-11-26 06:31:01.776983] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:17.650 [2024-11-26 06:31:01.777015] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:17.650 [2024-11-26 06:31:01.779400] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:17.650 [2024-11-26 06:31:01.779488] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:17.650 BaseBdev2 00:21:17.911 06:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.911 06:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:21:17.911 06:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.911 06:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:17.911 spare_malloc 00:21:17.911 06:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.911 06:31:01 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:17.911 06:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.911 06:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:17.911 spare_delay 00:21:17.911 06:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.911 06:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:17.912 06:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.912 06:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:17.912 [2024-11-26 06:31:01.866686] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:17.912 [2024-11-26 06:31:01.866764] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:17.912 [2024-11-26 06:31:01.866786] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:21:17.912 [2024-11-26 06:31:01.866799] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:17.912 [2024-11-26 06:31:01.869077] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:17.912 [2024-11-26 06:31:01.869116] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:17.912 spare 00:21:17.912 06:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.912 06:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:21:17.912 06:31:01 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.912 06:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:17.912 [2024-11-26 06:31:01.878706] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:17.912 [2024-11-26 06:31:01.880869] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:17.912 [2024-11-26 06:31:01.881115] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:17.912 [2024-11-26 06:31:01.881136] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:17.912 [2024-11-26 06:31:01.881215] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:21:17.912 [2024-11-26 06:31:01.881349] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:17.912 [2024-11-26 06:31:01.881358] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:17.912 [2024-11-26 06:31:01.881462] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:17.912 06:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.912 06:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:17.912 06:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:17.912 06:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:17.912 06:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:17.912 06:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:17.912 06:31:01 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:17.912 06:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:17.912 06:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:17.912 06:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:17.912 06:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:17.912 06:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:17.912 06:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:17.912 06:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.912 06:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:17.912 06:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.912 06:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:17.912 "name": "raid_bdev1", 00:21:17.912 "uuid": "f53a5be2-cc13-4806-89c1-08ea91d5b2e0", 00:21:17.912 "strip_size_kb": 0, 00:21:17.912 "state": "online", 00:21:17.912 "raid_level": "raid1", 00:21:17.912 "superblock": true, 00:21:17.912 "num_base_bdevs": 2, 00:21:17.912 "num_base_bdevs_discovered": 2, 00:21:17.912 "num_base_bdevs_operational": 2, 00:21:17.912 "base_bdevs_list": [ 00:21:17.912 { 00:21:17.912 "name": "BaseBdev1", 00:21:17.912 "uuid": "32a990f2-2cee-5c40-8dad-fb5107c1689b", 00:21:17.912 "is_configured": true, 00:21:17.912 "data_offset": 256, 00:21:17.912 "data_size": 7936 00:21:17.912 }, 00:21:17.912 { 00:21:17.912 "name": "BaseBdev2", 00:21:17.912 "uuid": 
"7dca197b-1023-539d-adf8-c51237eaa444", 00:21:17.912 "is_configured": true, 00:21:17.912 "data_offset": 256, 00:21:17.912 "data_size": 7936 00:21:17.912 } 00:21:17.912 ] 00:21:17.912 }' 00:21:17.912 06:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:17.912 06:31:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:18.482 06:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:18.483 06:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:21:18.483 06:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.483 06:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:18.483 [2024-11-26 06:31:02.330256] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:18.483 06:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.483 06:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:21:18.483 06:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:18.483 06:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:18.483 06:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.483 06:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:18.483 06:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.483 06:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:21:18.483 06:31:02 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:21:18.483 06:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:21:18.483 06:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:21:18.483 06:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:21:18.483 06:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:18.483 06:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:21:18.483 06:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:18.483 06:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:18.483 06:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:18.483 06:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:21:18.483 06:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:18.483 06:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:18.483 06:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:21:18.483 [2024-11-26 06:31:02.589583] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:18.483 /dev/nbd0 00:21:18.743 06:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:18.743 06:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:18.743 06:31:02 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:21:18.743 06:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:21:18.743 06:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:18.743 06:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:18.743 06:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:18.743 06:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:21:18.743 06:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:18.743 06:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:18.743 06:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:18.743 1+0 records in 00:21:18.743 1+0 records out 00:21:18.743 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000435453 s, 9.4 MB/s 00:21:18.743 06:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:18.743 06:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:21:18.743 06:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:18.743 06:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:18.743 06:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:21:18.743 06:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:18.743 06:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:18.743 06:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:21:18.743 06:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:21:18.743 06:31:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:21:19.312 7936+0 records in 00:21:19.312 7936+0 records out 00:21:19.312 32505856 bytes (33 MB, 31 MiB) copied, 0.676055 s, 48.1 MB/s 00:21:19.312 06:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:21:19.312 06:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:19.312 06:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:19.312 06:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:19.312 06:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:21:19.312 06:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:19.312 06:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:19.571 06:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:19.571 [2024-11-26 06:31:03.552546] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:19.571 06:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:19.571 06:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:19.571 06:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:19.571 06:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:19.571 06:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:19.571 06:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:21:19.571 06:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:21:19.571 06:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:21:19.571 06:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.571 06:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:19.571 [2024-11-26 06:31:03.568821] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:19.571 06:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.571 06:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:19.571 06:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:19.571 06:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:19.571 06:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:19.571 06:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:19.571 06:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:19.571 06:31:03 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:19.571 06:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:19.571 06:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:19.571 06:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:19.571 06:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:19.571 06:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:19.571 06:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.571 06:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:19.571 06:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.571 06:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:19.571 "name": "raid_bdev1", 00:21:19.571 "uuid": "f53a5be2-cc13-4806-89c1-08ea91d5b2e0", 00:21:19.571 "strip_size_kb": 0, 00:21:19.571 "state": "online", 00:21:19.571 "raid_level": "raid1", 00:21:19.571 "superblock": true, 00:21:19.571 "num_base_bdevs": 2, 00:21:19.571 "num_base_bdevs_discovered": 1, 00:21:19.571 "num_base_bdevs_operational": 1, 00:21:19.571 "base_bdevs_list": [ 00:21:19.571 { 00:21:19.571 "name": null, 00:21:19.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:19.571 "is_configured": false, 00:21:19.571 "data_offset": 0, 00:21:19.571 "data_size": 7936 00:21:19.571 }, 00:21:19.571 { 00:21:19.571 "name": "BaseBdev2", 00:21:19.571 "uuid": "7dca197b-1023-539d-adf8-c51237eaa444", 00:21:19.571 "is_configured": true, 00:21:19.571 "data_offset": 256, 00:21:19.571 "data_size": 7936 00:21:19.571 } 
00:21:19.571 ]
00:21:19.571 }'
00:21:19.571 06:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:21:19.571 06:31:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:21:20.140 06:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:21:20.140 06:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:20.140 06:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:21:20.140 [2024-11-26 06:31:04.048119] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:21:20.140 [2024-11-26 06:31:04.064555] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260
00:21:20.140 06:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:20.140 06:31:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1
00:21:20.140 [2024-11-26 06:31:04.067161] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:21:21.077 06:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:21:21.077 06:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:21:21.077 06:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:21:21.077 06:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare
00:21:21.077 06:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:21:21.077 06:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:21.077 06:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:21.077 06:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:21.077 06:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:21:21.078 06:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:21.078 06:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:21:21.078 "name": "raid_bdev1",
00:21:21.078 "uuid": "f53a5be2-cc13-4806-89c1-08ea91d5b2e0",
00:21:21.078 "strip_size_kb": 0,
00:21:21.078 "state": "online",
00:21:21.078 "raid_level": "raid1",
00:21:21.078 "superblock": true,
00:21:21.078 "num_base_bdevs": 2,
00:21:21.078 "num_base_bdevs_discovered": 2,
00:21:21.078 "num_base_bdevs_operational": 2,
00:21:21.078 "process": {
00:21:21.078 "type": "rebuild",
00:21:21.078 "target": "spare",
00:21:21.078 "progress": {
00:21:21.078 "blocks": 2560,
00:21:21.078 "percent": 32
00:21:21.078 }
00:21:21.078 },
00:21:21.078 "base_bdevs_list": [
00:21:21.078 {
00:21:21.078 "name": "spare",
00:21:21.078 "uuid": "ee18c383-c390-50f9-ba29-c2e866391360",
00:21:21.078 "is_configured": true,
00:21:21.078 "data_offset": 256,
00:21:21.078 "data_size": 7936
00:21:21.078 },
00:21:21.078 {
00:21:21.078 "name": "BaseBdev2",
00:21:21.078 "uuid": "7dca197b-1023-539d-adf8-c51237eaa444",
00:21:21.078 "is_configured": true,
00:21:21.078 "data_offset": 256,
00:21:21.078 "data_size": 7936
00:21:21.078 }
00:21:21.078 ]
00:21:21.078 }'
00:21:21.078 06:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:21:21.078 06:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:21:21.078 06:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:21:21.337 06:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:21:21.337 06:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:21:21.337 06:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:21.337 06:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:21:21.337 [2024-11-26 06:31:05.231515] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:21:21.337 [2024-11-26 06:31:05.276366] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:21:21.337 [2024-11-26 06:31:05.276493] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:21:21.337 [2024-11-26 06:31:05.276513] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:21:21.337 [2024-11-26 06:31:05.276525] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:21:21.337 06:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:21.337 06:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:21:21.337 06:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:21:21.337 06:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:21:21.337 06:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:21:21.337 06:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:21:21.337 06:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:21:21.337 06:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:21:21.337 06:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:21:21.337 06:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:21:21.337 06:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:21:21.337 06:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:21.337 06:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:21.337 06:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:21.338 06:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:21:21.338 06:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:21.338 06:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:21:21.338 "name": "raid_bdev1",
00:21:21.338 "uuid": "f53a5be2-cc13-4806-89c1-08ea91d5b2e0",
00:21:21.338 "strip_size_kb": 0,
00:21:21.338 "state": "online",
00:21:21.338 "raid_level": "raid1",
00:21:21.338 "superblock": true,
00:21:21.338 "num_base_bdevs": 2,
00:21:21.338 "num_base_bdevs_discovered": 1,
00:21:21.338 "num_base_bdevs_operational": 1,
00:21:21.338 "base_bdevs_list": [
00:21:21.338 {
00:21:21.338 "name": null,
00:21:21.338 "uuid": "00000000-0000-0000-0000-000000000000",
00:21:21.338 "is_configured": false,
00:21:21.338 "data_offset": 0,
00:21:21.338 "data_size": 7936
00:21:21.338 },
00:21:21.338 {
00:21:21.338 "name": "BaseBdev2",
00:21:21.338 "uuid": "7dca197b-1023-539d-adf8-c51237eaa444",
00:21:21.338 "is_configured": true,
00:21:21.338 "data_offset": 256,
00:21:21.338 "data_size": 7936
00:21:21.338 }
00:21:21.338 ]
00:21:21.338 }'
00:21:21.338 06:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:21:21.338 06:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:21:21.907 06:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:21:21.907 06:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:21:21.907 06:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:21:21.907 06:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none
00:21:21.907 06:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:21:21.907 06:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:21.907 06:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:21.907 06:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:21.907 06:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:21:21.907 06:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:21.907 06:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:21:21.907 "name": "raid_bdev1",
00:21:21.907 "uuid": "f53a5be2-cc13-4806-89c1-08ea91d5b2e0",
00:21:21.907 "strip_size_kb": 0,
00:21:21.907 "state": "online",
00:21:21.907 "raid_level": "raid1",
00:21:21.907 "superblock": true,
00:21:21.907 "num_base_bdevs": 2,
00:21:21.907 "num_base_bdevs_discovered": 1,
00:21:21.907 "num_base_bdevs_operational": 1,
00:21:21.907 "base_bdevs_list": [
00:21:21.907 {
00:21:21.907 "name": null,
00:21:21.907 "uuid": "00000000-0000-0000-0000-000000000000",
00:21:21.907 "is_configured": false,
00:21:21.907 "data_offset": 0,
00:21:21.907 "data_size": 7936
00:21:21.907 },
00:21:21.907 {
00:21:21.907 "name": "BaseBdev2",
00:21:21.907 "uuid": "7dca197b-1023-539d-adf8-c51237eaa444",
00:21:21.907 "is_configured": true,
00:21:21.907 "data_offset": 256,
00:21:21.907 "data_size": 7936
00:21:21.907 }
00:21:21.907 ]
00:21:21.907 }'
00:21:21.907 06:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:21:21.907 06:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:21:21.907 06:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:21:21.907 06:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:21:21.907 06:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:21:21.907 06:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:21.907 06:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:21:21.907 [2024-11-26 06:31:05.905658] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:21:21.907 [2024-11-26 06:31:05.919425] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330
00:21:21.907 06:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:21.907 06:31:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1
00:21:21.907 [2024-11-26 06:31:05.921639] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:21:22.846 06:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:21:22.846 06:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:21:22.846 06:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:21:22.846 06:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare
00:21:22.846 06:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:21:22.846 06:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:22.846 06:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:22.846 06:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:22.846 06:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:21:22.846 06:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:23.105 06:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:21:23.105 "name": "raid_bdev1",
00:21:23.105 "uuid": "f53a5be2-cc13-4806-89c1-08ea91d5b2e0",
00:21:23.105 "strip_size_kb": 0,
00:21:23.105 "state": "online",
00:21:23.105 "raid_level": "raid1",
00:21:23.105 "superblock": true,
00:21:23.105 "num_base_bdevs": 2,
00:21:23.105 "num_base_bdevs_discovered": 2,
00:21:23.105 "num_base_bdevs_operational": 2,
00:21:23.105 "process": {
00:21:23.105 "type": "rebuild",
00:21:23.105 "target": "spare",
00:21:23.105 "progress": {
00:21:23.105 "blocks": 2560,
00:21:23.105 "percent": 32
00:21:23.105 }
00:21:23.105 },
00:21:23.105 "base_bdevs_list": [
00:21:23.105 {
00:21:23.105 "name": "spare",
00:21:23.105 "uuid": "ee18c383-c390-50f9-ba29-c2e866391360",
00:21:23.105 "is_configured": true,
00:21:23.105 "data_offset": 256,
00:21:23.105 "data_size": 7936
00:21:23.105 },
00:21:23.105 {
00:21:23.105 "name": "BaseBdev2",
00:21:23.105 "uuid": "7dca197b-1023-539d-adf8-c51237eaa444",
00:21:23.105 "is_configured": true,
00:21:23.105 "data_offset": 256,
00:21:23.105 "data_size": 7936
00:21:23.105 }
00:21:23.105 ]
00:21:23.105 }'
00:21:23.105 06:31:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:21:23.105 06:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:21:23.105 06:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:21:23.105 06:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:21:23.105 06:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']'
00:21:23.105 06:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']'
00:21:23.105 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected
00:21:23.105 06:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2
00:21:23.105 06:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']'
00:21:23.105 06:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']'
00:21:23.105 06:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=740
00:21:23.105 06:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:21:23.105 06:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:21:23.105 06:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:21:23.105 06:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:21:23.105 06:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare
00:21:23.105 06:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:21:23.105 06:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:23.105 06:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:23.105 06:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:23.105 06:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:21:23.105 06:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:23.105 06:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:21:23.105 "name": "raid_bdev1",
00:21:23.105 "uuid": "f53a5be2-cc13-4806-89c1-08ea91d5b2e0",
00:21:23.105 "strip_size_kb": 0,
00:21:23.105 "state": "online",
00:21:23.105 "raid_level": "raid1",
00:21:23.105 "superblock": true,
00:21:23.105 "num_base_bdevs": 2,
00:21:23.105 "num_base_bdevs_discovered": 2,
00:21:23.105 "num_base_bdevs_operational": 2,
00:21:23.105 "process": {
00:21:23.105 "type": "rebuild",
00:21:23.105 "target": "spare",
00:21:23.105 "progress": {
00:21:23.105 "blocks": 2816,
00:21:23.105 "percent": 35
00:21:23.105 }
00:21:23.105 },
00:21:23.105 "base_bdevs_list": [
00:21:23.105 {
00:21:23.105 "name": "spare",
00:21:23.105 "uuid": "ee18c383-c390-50f9-ba29-c2e866391360",
00:21:23.106 "is_configured": true,
00:21:23.106 "data_offset": 256,
00:21:23.106 "data_size": 7936
00:21:23.106 },
00:21:23.106 {
00:21:23.106 "name": "BaseBdev2",
00:21:23.106 "uuid": "7dca197b-1023-539d-adf8-c51237eaa444",
00:21:23.106 "is_configured": true,
00:21:23.106 "data_offset": 256,
00:21:23.106 "data_size": 7936
00:21:23.106 }
00:21:23.106 ]
00:21:23.106 }'
00:21:23.106 06:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:21:23.106 06:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:21:23.106 06:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:21:23.106 06:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:21:23.106 06:31:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1
00:21:24.486 06:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:21:24.486 06:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:21:24.486 06:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:21:24.486 06:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:21:24.486 06:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare
00:21:24.486 06:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:21:24.486 06:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:24.486 06:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:24.486 06:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:24.486 06:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:21:24.486 06:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:24.486 06:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:21:24.486 "name": "raid_bdev1",
00:21:24.486 "uuid": "f53a5be2-cc13-4806-89c1-08ea91d5b2e0",
00:21:24.486 "strip_size_kb": 0,
00:21:24.486 "state": "online",
00:21:24.486 "raid_level": "raid1",
00:21:24.486 "superblock": true,
00:21:24.486 "num_base_bdevs": 2,
00:21:24.486 "num_base_bdevs_discovered": 2,
00:21:24.486 "num_base_bdevs_operational": 2,
00:21:24.486 "process": {
00:21:24.486 "type": "rebuild",
00:21:24.486 "target": "spare",
00:21:24.486 "progress": {
00:21:24.486 "blocks": 5888,
00:21:24.486 "percent": 74
00:21:24.486 }
00:21:24.486 },
00:21:24.486 "base_bdevs_list": [
00:21:24.486 {
00:21:24.486 "name": "spare",
00:21:24.486 "uuid": "ee18c383-c390-50f9-ba29-c2e866391360",
00:21:24.486 "is_configured": true,
00:21:24.486 "data_offset": 256,
00:21:24.486 "data_size": 7936
00:21:24.486 },
00:21:24.486 {
00:21:24.486 "name": "BaseBdev2",
00:21:24.486 "uuid": "7dca197b-1023-539d-adf8-c51237eaa444",
00:21:24.486 "is_configured": true,
00:21:24.486 "data_offset": 256,
00:21:24.486 "data_size": 7936
00:21:24.486 }
00:21:24.486 ]
00:21:24.486 }'
00:21:24.486 06:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:21:24.486 06:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:21:24.486 06:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:21:24.486 06:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:21:24.486 06:31:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1
00:21:25.056 [2024-11-26 06:31:09.045262] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
00:21:25.056 [2024-11-26 06:31:09.045353] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
00:21:25.056 [2024-11-26 06:31:09.045482] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:21:25.316 06:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:21:25.316 06:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:21:25.316 06:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:21:25.316 06:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:21:25.316 06:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare
00:21:25.316 06:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:21:25.316 06:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:25.316 06:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:25.316 06:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:25.316 06:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:21:25.317 06:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:25.317 06:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:21:25.317 "name": "raid_bdev1",
00:21:25.317 "uuid": "f53a5be2-cc13-4806-89c1-08ea91d5b2e0",
00:21:25.317 "strip_size_kb": 0,
00:21:25.317 "state": "online",
00:21:25.317 "raid_level": "raid1",
00:21:25.317 "superblock": true,
00:21:25.317 "num_base_bdevs": 2,
00:21:25.317 "num_base_bdevs_discovered": 2,
00:21:25.317 "num_base_bdevs_operational": 2,
00:21:25.317 "base_bdevs_list": [
00:21:25.317 {
00:21:25.317 "name": "spare",
00:21:25.317 "uuid": "ee18c383-c390-50f9-ba29-c2e866391360",
00:21:25.317 "is_configured": true,
00:21:25.317 "data_offset": 256,
00:21:25.317 "data_size": 7936
00:21:25.317 },
00:21:25.317 {
00:21:25.317 "name": "BaseBdev2",
00:21:25.317 "uuid": "7dca197b-1023-539d-adf8-c51237eaa444",
00:21:25.317 "is_configured": true,
00:21:25.317 "data_offset": 256,
00:21:25.317 "data_size": 7936
00:21:25.317 }
00:21:25.317 ]
00:21:25.317 }'
00:21:25.317 06:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:21:25.577 06:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]]
00:21:25.577 06:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:21:25.577 06:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]]
00:21:25.577 06:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break
00:21:25.577 06:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none
00:21:25.577 06:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:21:25.577 06:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:21:25.577 06:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none
00:21:25.577 06:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:21:25.577 06:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:25.577 06:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:25.577 06:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:21:25.577 06:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:25.577 06:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:25.577 06:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:21:25.577 "name": "raid_bdev1",
00:21:25.577 "uuid": "f53a5be2-cc13-4806-89c1-08ea91d5b2e0",
00:21:25.577 "strip_size_kb": 0,
00:21:25.577 "state": "online",
00:21:25.577 "raid_level": "raid1",
00:21:25.577 "superblock": true,
00:21:25.577 "num_base_bdevs": 2,
00:21:25.577 "num_base_bdevs_discovered": 2,
00:21:25.577 "num_base_bdevs_operational": 2,
00:21:25.577 "base_bdevs_list": [
00:21:25.577 {
00:21:25.577 "name": "spare",
00:21:25.577 "uuid": "ee18c383-c390-50f9-ba29-c2e866391360",
00:21:25.577 "is_configured": true,
00:21:25.577 "data_offset": 256,
00:21:25.577 "data_size": 7936
00:21:25.577 },
00:21:25.577 {
00:21:25.577 "name": "BaseBdev2",
00:21:25.577 "uuid": "7dca197b-1023-539d-adf8-c51237eaa444",
00:21:25.577 "is_configured": true,
00:21:25.577 "data_offset": 256,
00:21:25.577 "data_size": 7936
00:21:25.577 }
00:21:25.577 ]
00:21:25.577 }'
00:21:25.577 06:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:21:25.577 06:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:21:25.577 06:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:21:25.577 06:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:21:25.577 06:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:21:25.577 06:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:21:25.577 06:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:21:25.577 06:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:21:25.577 06:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:21:25.577 06:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:21:25.577 06:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:21:25.577 06:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:21:25.577 06:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:21:25.577 06:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:21:25.577 06:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:25.577 06:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:25.577 06:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:25.577 06:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:21:25.577 06:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:25.838 06:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:21:25.838 "name": "raid_bdev1",
00:21:25.838 "uuid": "f53a5be2-cc13-4806-89c1-08ea91d5b2e0",
00:21:25.838 "strip_size_kb": 0,
00:21:25.838 "state": "online",
00:21:25.838 "raid_level": "raid1",
00:21:25.838 "superblock": true,
00:21:25.838 "num_base_bdevs": 2,
00:21:25.838 "num_base_bdevs_discovered": 2,
00:21:25.838 "num_base_bdevs_operational": 2,
00:21:25.838 "base_bdevs_list": [
00:21:25.838 {
00:21:25.838 "name": "spare",
00:21:25.838 "uuid": "ee18c383-c390-50f9-ba29-c2e866391360",
00:21:25.838 "is_configured": true,
00:21:25.838 "data_offset": 256,
00:21:25.838 "data_size": 7936
00:21:25.838 },
00:21:25.838 {
00:21:25.838 "name": "BaseBdev2",
00:21:25.838 "uuid": "7dca197b-1023-539d-adf8-c51237eaa444",
00:21:25.838 "is_configured": true,
00:21:25.838 "data_offset": 256,
00:21:25.838 "data_size": 7936
00:21:25.838 }
00:21:25.838 ]
00:21:25.838 }'
00:21:25.838 06:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:21:25.838 06:31:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:21:26.099 06:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:21:26.099 06:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:26.099 06:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:21:26.099 [2024-11-26 06:31:10.093528] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:21:26.099 [2024-11-26 06:31:10.093567] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:21:26.099 [2024-11-26 06:31:10.093679] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:21:26.099 [2024-11-26 06:31:10.093772] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:21:26.099 [2024-11-26 06:31:10.093782] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:21:26.099 06:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:26.099 06:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:26.099 06:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length
00:21:26.099 06:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:26.099 06:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:21:26.099 06:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:26.099 06:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]]
00:21:26.099 06:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']'
00:21:26.099 06:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']'
00:21:26.099 06:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1'
00:21:26.099 06:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:21:26.099 06:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare')
00:21:26.099 06:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list
00:21:26.099 06:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:21:26.099 06:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list
00:21:26.099 06:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i
00:21:26.099 06:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:21:26.099 06:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:21:26.099 06:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0
00:21:26.359 /dev/nbd0
00:21:26.359 06:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:21:26.359 06:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:21:26.359 06:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:21:26.359 06:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i
00:21:26.359 06:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:21:26.359 06:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:21:26.359 06:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:21:26.359 06:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break
00:21:26.359 06:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:21:26.359 06:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:21:26.359 06:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:21:26.359 1+0 records in
00:21:26.359 1+0 records out
00:21:26.359 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000206577 s, 19.8 MB/s
00:21:26.359 06:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:21:26.359 06:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096
00:21:26.359 06:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:21:26.359 06:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:21:26.359 06:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0
00:21:26.359 06:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:21:26.359 06:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:21:26.359 06:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1
00:21:26.618 /dev/nbd1
00:21:26.618 06:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:21:26.618 06:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:21:26.618 06:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:21:26.618 06:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i
00:21:26.618 06:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:21:26.618 06:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:21:26.618 06:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:21:26.618 06:31:10 bdev_raid.raid_rebuild_test_sb_md_separate --
common/autotest_common.sh@877 -- # break 00:21:26.618 06:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:26.618 06:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:26.618 06:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:26.619 1+0 records in 00:21:26.619 1+0 records out 00:21:26.619 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000555283 s, 7.4 MB/s 00:21:26.619 06:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:26.619 06:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:21:26.619 06:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:26.619 06:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:26.619 06:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:21:26.619 06:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:26.619 06:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:26.619 06:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:26.877 06:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:21:26.877 06:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:26.877 06:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:26.877 06:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:26.877 06:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:21:26.877 06:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:26.877 06:31:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:27.136 06:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:27.136 06:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:27.136 06:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:27.136 06:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:27.136 06:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:27.136 06:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:27.136 06:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:21:27.136 06:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:21:27.136 06:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:27.136 06:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:21:27.396 06:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:27.396 06:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:27.396 
06:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:27.396 06:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:27.396 06:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:27.396 06:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:27.396 06:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:21:27.396 06:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:21:27.396 06:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:21:27.396 06:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:21:27.396 06:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.396 06:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:27.396 06:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.396 06:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:27.396 06:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.396 06:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:27.396 [2024-11-26 06:31:11.334454] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:27.396 [2024-11-26 06:31:11.334586] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:27.396 [2024-11-26 06:31:11.334619] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 
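The nbd setup traced above leans on the `waitfornbd` helper (common/autotest_common.sh@872-893): poll `/proc/partitions` until the nbd name appears, then prove the device is actually readable with a one-block `dd`. A simplified standalone sketch of that pattern follows; the partitions file is parameterized and the `iflag=direct` flag is dropped so it can run without root or a real `/dev/nbdX`, and the function name `wait_for_nbd` is illustrative, not SPDK's.

```shell
# Sketch of the waitfornbd pattern: poll a partitions listing for the nbd
# name, then read one 4 KiB block and check the copy is non-empty.
wait_for_nbd() {
    local nbd_name=$1 dev=$2 partitions=${3:-/proc/partitions}
    local i tmp size

    # Retry up to 20 times, as the helper in the trace does.
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" "$partitions" && break
        sleep 0.1
    done
    ((i <= 20)) || return 1

    # Mirrors the dd/stat readability check (iflag=direct omitted here).
    tmp=$(mktemp)
    dd if="$dev" of="$tmp" bs=4096 count=1 2>/dev/null || { rm -f "$tmp"; return 1; }
    size=$(stat -c %s "$tmp")
    rm -f "$tmp"
    [ "$size" != 0 ]
}
```

In the trace the same loop bounds (1..20) and the non-zero-size check (`'[' 4096 '!=' 0 ']'`) are visible verbatim.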
00:21:27.396 [2024-11-26 06:31:11.334629] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:27.396 [2024-11-26 06:31:11.337048] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:27.396 [2024-11-26 06:31:11.337093] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:27.396 [2024-11-26 06:31:11.337180] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:27.396 [2024-11-26 06:31:11.337252] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:27.396 [2024-11-26 06:31:11.337418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:27.396 spare 00:21:27.396 06:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.396 06:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:21:27.396 06:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.396 06:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:27.396 [2024-11-26 06:31:11.437351] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:21:27.396 [2024-11-26 06:31:11.437429] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:27.396 [2024-11-26 06:31:11.437607] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:21:27.396 [2024-11-26 06:31:11.437804] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:21:27.396 [2024-11-26 06:31:11.437816] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:21:27.396 [2024-11-26 06:31:11.437994] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:21:27.396 06:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.396 06:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:27.396 06:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:27.396 06:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:27.396 06:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:27.396 06:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:27.396 06:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:27.396 06:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:27.396 06:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:27.396 06:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:27.396 06:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:27.396 06:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:27.396 06:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.396 06:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:27.396 06:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:27.396 06:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.396 06:31:11 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:27.396 "name": "raid_bdev1", 00:21:27.396 "uuid": "f53a5be2-cc13-4806-89c1-08ea91d5b2e0", 00:21:27.396 "strip_size_kb": 0, 00:21:27.396 "state": "online", 00:21:27.396 "raid_level": "raid1", 00:21:27.396 "superblock": true, 00:21:27.396 "num_base_bdevs": 2, 00:21:27.396 "num_base_bdevs_discovered": 2, 00:21:27.396 "num_base_bdevs_operational": 2, 00:21:27.396 "base_bdevs_list": [ 00:21:27.396 { 00:21:27.396 "name": "spare", 00:21:27.396 "uuid": "ee18c383-c390-50f9-ba29-c2e866391360", 00:21:27.396 "is_configured": true, 00:21:27.396 "data_offset": 256, 00:21:27.396 "data_size": 7936 00:21:27.396 }, 00:21:27.396 { 00:21:27.396 "name": "BaseBdev2", 00:21:27.396 "uuid": "7dca197b-1023-539d-adf8-c51237eaa444", 00:21:27.396 "is_configured": true, 00:21:27.396 "data_offset": 256, 00:21:27.396 "data_size": 7936 00:21:27.396 } 00:21:27.396 ] 00:21:27.396 }' 00:21:27.396 06:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:27.396 06:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:27.965 06:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:27.965 06:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:27.965 06:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:27.965 06:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:27.965 06:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:27.965 06:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:27.965 06:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.965 06:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:27.965 06:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:27.965 06:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.965 06:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:27.965 "name": "raid_bdev1", 00:21:27.965 "uuid": "f53a5be2-cc13-4806-89c1-08ea91d5b2e0", 00:21:27.966 "strip_size_kb": 0, 00:21:27.966 "state": "online", 00:21:27.966 "raid_level": "raid1", 00:21:27.966 "superblock": true, 00:21:27.966 "num_base_bdevs": 2, 00:21:27.966 "num_base_bdevs_discovered": 2, 00:21:27.966 "num_base_bdevs_operational": 2, 00:21:27.966 "base_bdevs_list": [ 00:21:27.966 { 00:21:27.966 "name": "spare", 00:21:27.966 "uuid": "ee18c383-c390-50f9-ba29-c2e866391360", 00:21:27.966 "is_configured": true, 00:21:27.966 "data_offset": 256, 00:21:27.966 "data_size": 7936 00:21:27.966 }, 00:21:27.966 { 00:21:27.966 "name": "BaseBdev2", 00:21:27.966 "uuid": "7dca197b-1023-539d-adf8-c51237eaa444", 00:21:27.966 "is_configured": true, 00:21:27.966 "data_offset": 256, 00:21:27.966 "data_size": 7936 00:21:27.966 } 00:21:27.966 ] 00:21:27.966 }' 00:21:27.966 06:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:27.966 06:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:27.966 06:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:27.966 06:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:27.966 06:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:21:27.966 06:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.966 06:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:27.966 06:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:27.966 06:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.966 06:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:21:27.966 06:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:27.966 06:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.966 06:31:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:27.966 [2024-11-26 06:31:12.001389] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:27.966 06:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.966 06:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:27.966 06:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:27.966 06:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:27.966 06:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:27.966 06:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:27.966 06:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:27.966 06:31:12 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:27.966 06:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:27.966 06:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:27.966 06:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:27.966 06:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:27.966 06:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:27.966 06:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.966 06:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:27.966 06:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.966 06:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:27.966 "name": "raid_bdev1", 00:21:27.966 "uuid": "f53a5be2-cc13-4806-89c1-08ea91d5b2e0", 00:21:27.966 "strip_size_kb": 0, 00:21:27.966 "state": "online", 00:21:27.966 "raid_level": "raid1", 00:21:27.966 "superblock": true, 00:21:27.966 "num_base_bdevs": 2, 00:21:27.966 "num_base_bdevs_discovered": 1, 00:21:27.966 "num_base_bdevs_operational": 1, 00:21:27.966 "base_bdevs_list": [ 00:21:27.966 { 00:21:27.966 "name": null, 00:21:27.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:27.966 "is_configured": false, 00:21:27.966 "data_offset": 0, 00:21:27.966 "data_size": 7936 00:21:27.966 }, 00:21:27.966 { 00:21:27.966 "name": "BaseBdev2", 00:21:27.966 "uuid": "7dca197b-1023-539d-adf8-c51237eaa444", 00:21:27.966 "is_configured": true, 00:21:27.966 "data_offset": 256, 00:21:27.966 "data_size": 7936 00:21:27.966 } 
00:21:27.966 ] 00:21:27.966 }' 00:21:27.966 06:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:27.966 06:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:28.538 06:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:28.538 06:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.538 06:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:28.538 [2024-11-26 06:31:12.444660] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:28.538 [2024-11-26 06:31:12.444991] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:21:28.538 [2024-11-26 06:31:12.445066] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
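The `verify_raid_bdev_state` / `verify_raid_bdev_process` helpers traced above all work by filtering the JSON that `rpc.py bdev_raid_get_bdevs all` returns through `jq`. The sketch below replays those exact filters on an abbreviated copy of the JSON dumped in this trace (most fields trimmed); it assumes `jq` is installed, as the test scripts themselves do.

```shell
# Abbreviated bdev_raid_get_bdevs output, fed through the same jq filters
# the verify helpers use in the trace.
info='[{
  "name": "raid_bdev1",
  "state": "online",
  "raid_level": "raid1",
  "num_base_bdevs_discovered": 2,
  "process": { "type": "rebuild", "target": "spare" },
  "base_bdevs_list": [ { "name": "spare" }, { "name": "BaseBdev2" } ]
}]'

# bdev_raid.sh@113/@174: pick out the bdev under test.
raid_bdev_info=$(jq -r '.[] | select(.name == "raid_bdev1")' <<< "$info")

# bdev_raid.sh@176/@177: background-process type and target, "none" if absent.
jq -r '.process.type // "none"'   <<< "$raid_bdev_info"   # rebuild
jq -r '.process.target // "none"' <<< "$raid_bdev_info"   # spare

# bdev_raid.sh@751: name of the first base bdev.
jq -r '.[].base_bdevs_list[0].name' <<< "$info"           # spare
```

The `// "none"` alternative is what lets the same helper assert both "a rebuild is running toward spare" and, once it finishes, "no process is running".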
00:21:28.538 [2024-11-26 06:31:12.445156] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:28.538 [2024-11-26 06:31:12.459198] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:21:28.538 06:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.538 06:31:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:21:28.538 [2024-11-26 06:31:12.461428] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:29.487 06:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:29.487 06:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:29.487 06:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:29.487 06:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:29.487 06:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:29.487 06:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:29.487 06:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.487 06:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:29.487 06:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:29.487 06:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.487 06:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:29.487 "name": "raid_bdev1", 00:21:29.487 
"uuid": "f53a5be2-cc13-4806-89c1-08ea91d5b2e0", 00:21:29.487 "strip_size_kb": 0, 00:21:29.487 "state": "online", 00:21:29.487 "raid_level": "raid1", 00:21:29.487 "superblock": true, 00:21:29.487 "num_base_bdevs": 2, 00:21:29.487 "num_base_bdevs_discovered": 2, 00:21:29.487 "num_base_bdevs_operational": 2, 00:21:29.487 "process": { 00:21:29.487 "type": "rebuild", 00:21:29.487 "target": "spare", 00:21:29.487 "progress": { 00:21:29.487 "blocks": 2560, 00:21:29.487 "percent": 32 00:21:29.487 } 00:21:29.487 }, 00:21:29.487 "base_bdevs_list": [ 00:21:29.487 { 00:21:29.487 "name": "spare", 00:21:29.487 "uuid": "ee18c383-c390-50f9-ba29-c2e866391360", 00:21:29.487 "is_configured": true, 00:21:29.487 "data_offset": 256, 00:21:29.487 "data_size": 7936 00:21:29.487 }, 00:21:29.487 { 00:21:29.487 "name": "BaseBdev2", 00:21:29.487 "uuid": "7dca197b-1023-539d-adf8-c51237eaa444", 00:21:29.487 "is_configured": true, 00:21:29.487 "data_offset": 256, 00:21:29.487 "data_size": 7936 00:21:29.487 } 00:21:29.487 ] 00:21:29.487 }' 00:21:29.487 06:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:29.487 06:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:29.487 06:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:29.487 06:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:29.487 06:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:21:29.487 06:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.487 06:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:29.746 [2024-11-26 06:31:13.621814] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:29.746 
[2024-11-26 06:31:13.670527] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:29.746 [2024-11-26 06:31:13.670589] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:29.746 [2024-11-26 06:31:13.670603] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:29.746 [2024-11-26 06:31:13.670623] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:29.746 06:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.746 06:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:29.746 06:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:29.746 06:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:29.746 06:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:29.746 06:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:29.746 06:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:29.746 06:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:29.746 06:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:29.746 06:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:29.746 06:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:29.746 06:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:29.746 06:31:13 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:29.746 06:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.746 06:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:29.746 06:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.746 06:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:29.746 "name": "raid_bdev1", 00:21:29.746 "uuid": "f53a5be2-cc13-4806-89c1-08ea91d5b2e0", 00:21:29.746 "strip_size_kb": 0, 00:21:29.746 "state": "online", 00:21:29.746 "raid_level": "raid1", 00:21:29.746 "superblock": true, 00:21:29.746 "num_base_bdevs": 2, 00:21:29.746 "num_base_bdevs_discovered": 1, 00:21:29.746 "num_base_bdevs_operational": 1, 00:21:29.746 "base_bdevs_list": [ 00:21:29.746 { 00:21:29.746 "name": null, 00:21:29.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:29.746 "is_configured": false, 00:21:29.746 "data_offset": 0, 00:21:29.746 "data_size": 7936 00:21:29.746 }, 00:21:29.746 { 00:21:29.746 "name": "BaseBdev2", 00:21:29.746 "uuid": "7dca197b-1023-539d-adf8-c51237eaa444", 00:21:29.746 "is_configured": true, 00:21:29.746 "data_offset": 256, 00:21:29.746 "data_size": 7936 00:21:29.746 } 00:21:29.746 ] 00:21:29.746 }' 00:21:29.746 06:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:29.746 06:31:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:30.005 06:31:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:30.005 06:31:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.005 06:31:14 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:21:30.005 [2024-11-26 06:31:14.123116] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:30.005 [2024-11-26 06:31:14.123235] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:30.005 [2024-11-26 06:31:14.123275] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:21:30.005 [2024-11-26 06:31:14.123305] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:30.005 [2024-11-26 06:31:14.123652] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:30.005 [2024-11-26 06:31:14.123709] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:30.006 [2024-11-26 06:31:14.123814] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:30.006 [2024-11-26 06:31:14.123857] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:21:30.006 [2024-11-26 06:31:14.123903] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:21:30.006 [2024-11-26 06:31:14.123961] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:30.264 [2024-11-26 06:31:14.137699] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:21:30.264 spare 00:21:30.264 06:31:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.265 06:31:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:21:30.265 [2024-11-26 06:31:14.139953] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:31.201 06:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:31.201 06:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:31.201 06:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:31.201 06:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:31.201 06:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:31.201 06:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:31.201 06:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.201 06:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:31.201 06:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:31.201 06:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.201 06:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:31.201 "name": 
"raid_bdev1", 00:21:31.201 "uuid": "f53a5be2-cc13-4806-89c1-08ea91d5b2e0", 00:21:31.201 "strip_size_kb": 0, 00:21:31.201 "state": "online", 00:21:31.201 "raid_level": "raid1", 00:21:31.201 "superblock": true, 00:21:31.201 "num_base_bdevs": 2, 00:21:31.201 "num_base_bdevs_discovered": 2, 00:21:31.201 "num_base_bdevs_operational": 2, 00:21:31.201 "process": { 00:21:31.201 "type": "rebuild", 00:21:31.201 "target": "spare", 00:21:31.201 "progress": { 00:21:31.201 "blocks": 2560, 00:21:31.201 "percent": 32 00:21:31.201 } 00:21:31.201 }, 00:21:31.201 "base_bdevs_list": [ 00:21:31.201 { 00:21:31.201 "name": "spare", 00:21:31.201 "uuid": "ee18c383-c390-50f9-ba29-c2e866391360", 00:21:31.201 "is_configured": true, 00:21:31.201 "data_offset": 256, 00:21:31.201 "data_size": 7936 00:21:31.201 }, 00:21:31.201 { 00:21:31.201 "name": "BaseBdev2", 00:21:31.201 "uuid": "7dca197b-1023-539d-adf8-c51237eaa444", 00:21:31.201 "is_configured": true, 00:21:31.201 "data_offset": 256, 00:21:31.201 "data_size": 7936 00:21:31.201 } 00:21:31.201 ] 00:21:31.201 }' 00:21:31.201 06:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:31.201 06:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:31.201 06:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:31.201 06:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:31.201 06:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:21:31.201 06:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.201 06:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:31.201 [2024-11-26 06:31:15.292594] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:21:31.460 [2024-11-26 06:31:15.349357] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:31.460 [2024-11-26 06:31:15.349421] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:31.460 [2024-11-26 06:31:15.349440] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:31.460 [2024-11-26 06:31:15.349448] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:31.460 06:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.460 06:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:31.460 06:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:31.460 06:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:31.460 06:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:31.460 06:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:31.460 06:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:31.460 06:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:31.460 06:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:31.460 06:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:31.460 06:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:31.460 06:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "raid_bdev1")' 00:21:31.460 06:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:31.460 06:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.460 06:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:31.460 06:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.460 06:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:31.460 "name": "raid_bdev1", 00:21:31.460 "uuid": "f53a5be2-cc13-4806-89c1-08ea91d5b2e0", 00:21:31.460 "strip_size_kb": 0, 00:21:31.460 "state": "online", 00:21:31.460 "raid_level": "raid1", 00:21:31.460 "superblock": true, 00:21:31.460 "num_base_bdevs": 2, 00:21:31.460 "num_base_bdevs_discovered": 1, 00:21:31.460 "num_base_bdevs_operational": 1, 00:21:31.460 "base_bdevs_list": [ 00:21:31.460 { 00:21:31.460 "name": null, 00:21:31.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:31.460 "is_configured": false, 00:21:31.460 "data_offset": 0, 00:21:31.460 "data_size": 7936 00:21:31.460 }, 00:21:31.460 { 00:21:31.460 "name": "BaseBdev2", 00:21:31.460 "uuid": "7dca197b-1023-539d-adf8-c51237eaa444", 00:21:31.460 "is_configured": true, 00:21:31.460 "data_offset": 256, 00:21:31.460 "data_size": 7936 00:21:31.460 } 00:21:31.460 ] 00:21:31.460 }' 00:21:31.460 06:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:31.460 06:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:31.720 06:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:31.720 06:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:31.720 06:31:15 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:31.720 06:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:31.720 06:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:31.720 06:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:31.720 06:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:31.720 06:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.720 06:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:31.720 06:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.720 06:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:31.720 "name": "raid_bdev1", 00:21:31.720 "uuid": "f53a5be2-cc13-4806-89c1-08ea91d5b2e0", 00:21:31.720 "strip_size_kb": 0, 00:21:31.720 "state": "online", 00:21:31.720 "raid_level": "raid1", 00:21:31.720 "superblock": true, 00:21:31.720 "num_base_bdevs": 2, 00:21:31.720 "num_base_bdevs_discovered": 1, 00:21:31.720 "num_base_bdevs_operational": 1, 00:21:31.720 "base_bdevs_list": [ 00:21:31.720 { 00:21:31.720 "name": null, 00:21:31.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:31.720 "is_configured": false, 00:21:31.720 "data_offset": 0, 00:21:31.720 "data_size": 7936 00:21:31.720 }, 00:21:31.720 { 00:21:31.720 "name": "BaseBdev2", 00:21:31.720 "uuid": "7dca197b-1023-539d-adf8-c51237eaa444", 00:21:31.720 "is_configured": true, 00:21:31.720 "data_offset": 256, 00:21:31.720 "data_size": 7936 00:21:31.720 } 00:21:31.720 ] 00:21:31.720 }' 00:21:31.720 06:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:31.980 06:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:31.980 06:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:31.980 06:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:31.980 06:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:21:31.980 06:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.980 06:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:31.980 06:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.980 06:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:31.980 06:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.980 06:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:31.980 [2024-11-26 06:31:15.954883] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:31.980 [2024-11-26 06:31:15.954952] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:31.980 [2024-11-26 06:31:15.954984] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:21:31.980 [2024-11-26 06:31:15.954994] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:31.980 [2024-11-26 06:31:15.955274] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:31.980 [2024-11-26 06:31:15.955287] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:21:31.980 [2024-11-26 06:31:15.955344] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:31.980 [2024-11-26 06:31:15.955358] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:21:31.980 [2024-11-26 06:31:15.955369] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:31.980 [2024-11-26 06:31:15.955382] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:21:31.980 BaseBdev1 00:21:31.980 06:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.980 06:31:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:21:32.919 06:31:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:32.919 06:31:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:32.919 06:31:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:32.919 06:31:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:32.919 06:31:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:32.919 06:31:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:32.919 06:31:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:32.919 06:31:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:32.919 06:31:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:21:32.919 06:31:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:32.919 06:31:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:32.919 06:31:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:32.919 06:31:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.919 06:31:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:32.919 06:31:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.919 06:31:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:32.919 "name": "raid_bdev1", 00:21:32.919 "uuid": "f53a5be2-cc13-4806-89c1-08ea91d5b2e0", 00:21:32.919 "strip_size_kb": 0, 00:21:32.919 "state": "online", 00:21:32.919 "raid_level": "raid1", 00:21:32.919 "superblock": true, 00:21:32.919 "num_base_bdevs": 2, 00:21:32.919 "num_base_bdevs_discovered": 1, 00:21:32.919 "num_base_bdevs_operational": 1, 00:21:32.919 "base_bdevs_list": [ 00:21:32.919 { 00:21:32.919 "name": null, 00:21:32.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:32.919 "is_configured": false, 00:21:32.919 "data_offset": 0, 00:21:32.919 "data_size": 7936 00:21:32.919 }, 00:21:32.919 { 00:21:32.919 "name": "BaseBdev2", 00:21:32.919 "uuid": "7dca197b-1023-539d-adf8-c51237eaa444", 00:21:32.919 "is_configured": true, 00:21:32.919 "data_offset": 256, 00:21:32.919 "data_size": 7936 00:21:32.919 } 00:21:32.919 ] 00:21:32.919 }' 00:21:32.919 06:31:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:32.919 06:31:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:33.489 06:31:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:21:33.489 06:31:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:33.489 06:31:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:33.489 06:31:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:33.489 06:31:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:33.489 06:31:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:33.489 06:31:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.489 06:31:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:33.489 06:31:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:33.489 06:31:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.489 06:31:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:33.489 "name": "raid_bdev1", 00:21:33.489 "uuid": "f53a5be2-cc13-4806-89c1-08ea91d5b2e0", 00:21:33.489 "strip_size_kb": 0, 00:21:33.489 "state": "online", 00:21:33.489 "raid_level": "raid1", 00:21:33.489 "superblock": true, 00:21:33.489 "num_base_bdevs": 2, 00:21:33.489 "num_base_bdevs_discovered": 1, 00:21:33.489 "num_base_bdevs_operational": 1, 00:21:33.489 "base_bdevs_list": [ 00:21:33.489 { 00:21:33.489 "name": null, 00:21:33.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:33.489 "is_configured": false, 00:21:33.489 "data_offset": 0, 00:21:33.489 "data_size": 7936 00:21:33.489 }, 00:21:33.489 { 00:21:33.489 "name": "BaseBdev2", 00:21:33.489 "uuid": "7dca197b-1023-539d-adf8-c51237eaa444", 00:21:33.489 "is_configured": 
true, 00:21:33.489 "data_offset": 256, 00:21:33.489 "data_size": 7936 00:21:33.489 } 00:21:33.489 ] 00:21:33.489 }' 00:21:33.489 06:31:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:33.489 06:31:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:33.489 06:31:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:33.489 06:31:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:33.489 06:31:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:33.489 06:31:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:21:33.489 06:31:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:33.489 06:31:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:33.489 06:31:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:33.489 06:31:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:33.489 06:31:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:33.489 06:31:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:33.489 06:31:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.489 06:31:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:33.489 [2024-11-26 06:31:17.536399] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:33.489 [2024-11-26 06:31:17.536679] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:21:33.489 [2024-11-26 06:31:17.536744] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:33.489 request: 00:21:33.489 { 00:21:33.489 "base_bdev": "BaseBdev1", 00:21:33.489 "raid_bdev": "raid_bdev1", 00:21:33.489 "method": "bdev_raid_add_base_bdev", 00:21:33.489 "req_id": 1 00:21:33.489 } 00:21:33.490 Got JSON-RPC error response 00:21:33.490 response: 00:21:33.490 { 00:21:33.490 "code": -22, 00:21:33.490 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:21:33.490 } 00:21:33.490 06:31:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:33.490 06:31:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:21:33.490 06:31:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:33.490 06:31:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:33.490 06:31:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:33.490 06:31:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:21:34.430 06:31:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:34.430 06:31:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:34.430 06:31:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:34.430 06:31:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:21:34.430 06:31:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:34.430 06:31:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:34.430 06:31:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:34.430 06:31:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:34.430 06:31:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:34.430 06:31:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:34.430 06:31:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:34.430 06:31:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.430 06:31:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:34.430 06:31:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:34.688 06:31:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.688 06:31:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:34.688 "name": "raid_bdev1", 00:21:34.688 "uuid": "f53a5be2-cc13-4806-89c1-08ea91d5b2e0", 00:21:34.688 "strip_size_kb": 0, 00:21:34.688 "state": "online", 00:21:34.688 "raid_level": "raid1", 00:21:34.689 "superblock": true, 00:21:34.689 "num_base_bdevs": 2, 00:21:34.689 "num_base_bdevs_discovered": 1, 00:21:34.689 "num_base_bdevs_operational": 1, 00:21:34.689 "base_bdevs_list": [ 00:21:34.689 { 00:21:34.689 "name": null, 00:21:34.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:34.689 "is_configured": false, 00:21:34.689 
"data_offset": 0, 00:21:34.689 "data_size": 7936 00:21:34.689 }, 00:21:34.689 { 00:21:34.689 "name": "BaseBdev2", 00:21:34.689 "uuid": "7dca197b-1023-539d-adf8-c51237eaa444", 00:21:34.689 "is_configured": true, 00:21:34.689 "data_offset": 256, 00:21:34.689 "data_size": 7936 00:21:34.689 } 00:21:34.689 ] 00:21:34.689 }' 00:21:34.689 06:31:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:34.689 06:31:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:34.947 06:31:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:34.947 06:31:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:34.947 06:31:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:34.947 06:31:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:34.947 06:31:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:34.947 06:31:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:34.947 06:31:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:34.947 06:31:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.947 06:31:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:34.947 06:31:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.205 06:31:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:35.205 "name": "raid_bdev1", 00:21:35.205 "uuid": "f53a5be2-cc13-4806-89c1-08ea91d5b2e0", 00:21:35.205 
"strip_size_kb": 0, 00:21:35.205 "state": "online", 00:21:35.205 "raid_level": "raid1", 00:21:35.206 "superblock": true, 00:21:35.206 "num_base_bdevs": 2, 00:21:35.206 "num_base_bdevs_discovered": 1, 00:21:35.206 "num_base_bdevs_operational": 1, 00:21:35.206 "base_bdevs_list": [ 00:21:35.206 { 00:21:35.206 "name": null, 00:21:35.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:35.206 "is_configured": false, 00:21:35.206 "data_offset": 0, 00:21:35.206 "data_size": 7936 00:21:35.206 }, 00:21:35.206 { 00:21:35.206 "name": "BaseBdev2", 00:21:35.206 "uuid": "7dca197b-1023-539d-adf8-c51237eaa444", 00:21:35.206 "is_configured": true, 00:21:35.206 "data_offset": 256, 00:21:35.206 "data_size": 7936 00:21:35.206 } 00:21:35.206 ] 00:21:35.206 }' 00:21:35.206 06:31:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:35.206 06:31:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:35.206 06:31:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:35.206 06:31:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:35.206 06:31:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 88337 00:21:35.206 06:31:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 88337 ']' 00:21:35.206 06:31:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 88337 00:21:35.206 06:31:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:21:35.206 06:31:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:35.206 06:31:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88337 00:21:35.206 killing process with 
pid 88337 00:21:35.206 Received shutdown signal, test time was about 60.000000 seconds 00:21:35.206 00:21:35.206 Latency(us) 00:21:35.206 [2024-11-26T06:31:19.343Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:35.206 [2024-11-26T06:31:19.343Z] =================================================================================================================== 00:21:35.206 [2024-11-26T06:31:19.343Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:35.206 06:31:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:35.206 06:31:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:35.206 06:31:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88337' 00:21:35.206 06:31:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 88337 00:21:35.206 [2024-11-26 06:31:19.215409] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:35.206 06:31:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 88337 00:21:35.206 [2024-11-26 06:31:19.215573] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:35.206 [2024-11-26 06:31:19.215629] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:35.206 [2024-11-26 06:31:19.215641] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:21:35.464 [2024-11-26 06:31:19.552519] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:36.841 06:31:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:21:36.841 00:21:36.841 real 0m19.998s 00:21:36.841 user 0m25.872s 00:21:36.841 sys 0m2.850s 00:21:36.841 06:31:20 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:36.841 06:31:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:36.841 ************************************ 00:21:36.841 END TEST raid_rebuild_test_sb_md_separate 00:21:36.841 ************************************ 00:21:36.841 06:31:20 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:21:36.841 06:31:20 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:21:36.841 06:31:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:36.841 06:31:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:36.841 06:31:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:36.841 ************************************ 00:21:36.841 START TEST raid_state_function_test_sb_md_interleaved 00:21:36.841 ************************************ 00:21:36.841 06:31:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:21:36.841 06:31:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:21:36.841 06:31:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:21:36.841 06:31:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:21:36.841 06:31:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:21:36.841 06:31:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:21:36.841 06:31:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:36.841 06:31:20 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:21:36.841 06:31:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:36.841 06:31:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:36.841 06:31:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:21:36.841 06:31:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:36.841 06:31:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:36.841 06:31:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:36.841 06:31:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:21:36.841 06:31:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:21:36.841 06:31:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:21:36.841 06:31:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:21:36.841 06:31:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:21:36.842 06:31:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:21:36.842 06:31:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:21:36.842 06:31:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:21:36.842 06:31:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:21:36.842 06:31:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=89030 00:21:36.842 06:31:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:21:36.842 06:31:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 89030' 00:21:36.842 Process raid pid: 89030 00:21:36.842 06:31:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 89030 00:21:36.842 06:31:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89030 ']' 00:21:36.842 06:31:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:36.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:36.842 06:31:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:36.842 06:31:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:36.842 06:31:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:36.842 06:31:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:36.842 [2024-11-26 06:31:20.897282] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:21:36.842 [2024-11-26 06:31:20.897534] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:37.101 [2024-11-26 06:31:21.077025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:37.101 [2024-11-26 06:31:21.213723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:37.359 [2024-11-26 06:31:21.459653] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:37.359 [2024-11-26 06:31:21.459704] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:37.618 06:31:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:37.618 06:31:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:21:37.618 06:31:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:37.618 06:31:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.618 06:31:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:37.618 [2024-11-26 06:31:21.730783] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:37.618 [2024-11-26 06:31:21.730841] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:37.618 [2024-11-26 06:31:21.730851] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:37.618 [2024-11-26 06:31:21.730862] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:37.618 06:31:21 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.618 06:31:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:37.618 06:31:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:37.618 06:31:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:37.618 06:31:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:37.618 06:31:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:37.618 06:31:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:37.618 06:31:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:37.618 06:31:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:37.618 06:31:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:37.618 06:31:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:37.618 06:31:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:37.618 06:31:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.618 06:31:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:37.618 06:31:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:37.877 06:31:21 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.877 06:31:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:37.877 "name": "Existed_Raid", 00:21:37.877 "uuid": "6621b81b-33e0-42c9-8c01-62c9da28349f", 00:21:37.877 "strip_size_kb": 0, 00:21:37.877 "state": "configuring", 00:21:37.877 "raid_level": "raid1", 00:21:37.877 "superblock": true, 00:21:37.877 "num_base_bdevs": 2, 00:21:37.877 "num_base_bdevs_discovered": 0, 00:21:37.877 "num_base_bdevs_operational": 2, 00:21:37.877 "base_bdevs_list": [ 00:21:37.877 { 00:21:37.877 "name": "BaseBdev1", 00:21:37.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:37.877 "is_configured": false, 00:21:37.877 "data_offset": 0, 00:21:37.877 "data_size": 0 00:21:37.877 }, 00:21:37.877 { 00:21:37.877 "name": "BaseBdev2", 00:21:37.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:37.877 "is_configured": false, 00:21:37.877 "data_offset": 0, 00:21:37.877 "data_size": 0 00:21:37.877 } 00:21:37.877 ] 00:21:37.877 }' 00:21:37.877 06:31:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:37.877 06:31:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:38.136 06:31:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:38.136 06:31:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.136 06:31:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:38.136 [2024-11-26 06:31:22.193926] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:38.136 [2024-11-26 06:31:22.194029] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:21:38.136 06:31:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.136 06:31:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:38.136 06:31:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.136 06:31:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:38.136 [2024-11-26 06:31:22.201887] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:38.136 [2024-11-26 06:31:22.201969] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:38.136 [2024-11-26 06:31:22.201997] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:38.136 [2024-11-26 06:31:22.202024] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:38.136 06:31:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.136 06:31:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:21:38.136 06:31:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.136 06:31:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:38.136 [2024-11-26 06:31:22.255033] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:38.136 BaseBdev1 00:21:38.136 06:31:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.136 06:31:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:21:38.136 06:31:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:21:38.136 06:31:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:38.136 06:31:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:21:38.136 06:31:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:38.136 06:31:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:38.136 06:31:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:38.136 06:31:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.136 06:31:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:38.394 06:31:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.394 06:31:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:38.394 06:31:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.394 06:31:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:38.394 [ 00:21:38.394 { 00:21:38.394 "name": "BaseBdev1", 00:21:38.394 "aliases": [ 00:21:38.395 "2514a612-54a2-4ddb-8add-9bac42ac3f53" 00:21:38.395 ], 00:21:38.395 "product_name": "Malloc disk", 00:21:38.395 "block_size": 4128, 00:21:38.395 "num_blocks": 8192, 00:21:38.395 "uuid": "2514a612-54a2-4ddb-8add-9bac42ac3f53", 00:21:38.395 "md_size": 32, 00:21:38.395 
"md_interleave": true, 00:21:38.395 "dif_type": 0, 00:21:38.395 "assigned_rate_limits": { 00:21:38.395 "rw_ios_per_sec": 0, 00:21:38.395 "rw_mbytes_per_sec": 0, 00:21:38.395 "r_mbytes_per_sec": 0, 00:21:38.395 "w_mbytes_per_sec": 0 00:21:38.395 }, 00:21:38.395 "claimed": true, 00:21:38.395 "claim_type": "exclusive_write", 00:21:38.395 "zoned": false, 00:21:38.395 "supported_io_types": { 00:21:38.395 "read": true, 00:21:38.395 "write": true, 00:21:38.395 "unmap": true, 00:21:38.395 "flush": true, 00:21:38.395 "reset": true, 00:21:38.395 "nvme_admin": false, 00:21:38.395 "nvme_io": false, 00:21:38.395 "nvme_io_md": false, 00:21:38.395 "write_zeroes": true, 00:21:38.395 "zcopy": true, 00:21:38.395 "get_zone_info": false, 00:21:38.395 "zone_management": false, 00:21:38.395 "zone_append": false, 00:21:38.395 "compare": false, 00:21:38.395 "compare_and_write": false, 00:21:38.395 "abort": true, 00:21:38.395 "seek_hole": false, 00:21:38.395 "seek_data": false, 00:21:38.395 "copy": true, 00:21:38.395 "nvme_iov_md": false 00:21:38.395 }, 00:21:38.395 "memory_domains": [ 00:21:38.395 { 00:21:38.395 "dma_device_id": "system", 00:21:38.395 "dma_device_type": 1 00:21:38.395 }, 00:21:38.395 { 00:21:38.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:38.395 "dma_device_type": 2 00:21:38.395 } 00:21:38.395 ], 00:21:38.395 "driver_specific": {} 00:21:38.395 } 00:21:38.395 ] 00:21:38.395 06:31:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.395 06:31:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:21:38.395 06:31:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:38.395 06:31:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:38.395 06:31:22 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:38.395 06:31:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:38.395 06:31:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:38.395 06:31:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:38.395 06:31:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:38.395 06:31:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:38.395 06:31:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:38.395 06:31:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:38.395 06:31:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:38.395 06:31:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:38.395 06:31:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.395 06:31:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:38.395 06:31:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.395 06:31:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:38.395 "name": "Existed_Raid", 00:21:38.395 "uuid": "e33775dd-33e4-44f8-8f0a-683d49c12da5", 00:21:38.395 "strip_size_kb": 0, 00:21:38.395 "state": "configuring", 00:21:38.395 "raid_level": "raid1", 
00:21:38.395 "superblock": true, 00:21:38.395 "num_base_bdevs": 2, 00:21:38.395 "num_base_bdevs_discovered": 1, 00:21:38.395 "num_base_bdevs_operational": 2, 00:21:38.395 "base_bdevs_list": [ 00:21:38.395 { 00:21:38.395 "name": "BaseBdev1", 00:21:38.395 "uuid": "2514a612-54a2-4ddb-8add-9bac42ac3f53", 00:21:38.395 "is_configured": true, 00:21:38.395 "data_offset": 256, 00:21:38.395 "data_size": 7936 00:21:38.395 }, 00:21:38.395 { 00:21:38.395 "name": "BaseBdev2", 00:21:38.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:38.395 "is_configured": false, 00:21:38.395 "data_offset": 0, 00:21:38.395 "data_size": 0 00:21:38.395 } 00:21:38.395 ] 00:21:38.395 }' 00:21:38.395 06:31:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:38.395 06:31:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:38.652 06:31:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:38.652 06:31:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.652 06:31:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:38.652 [2024-11-26 06:31:22.782218] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:38.652 [2024-11-26 06:31:22.782281] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:21:38.910 06:31:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.910 06:31:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:38.910 06:31:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:21:38.910 06:31:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:38.910 [2024-11-26 06:31:22.790267] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:38.910 [2024-11-26 06:31:22.792543] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:38.910 [2024-11-26 06:31:22.792624] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:38.910 06:31:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.910 06:31:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:21:38.910 06:31:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:38.910 06:31:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:38.910 06:31:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:38.910 06:31:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:38.910 06:31:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:38.910 06:31:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:38.910 06:31:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:38.910 06:31:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:38.910 06:31:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:38.910 
06:31:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:38.910 06:31:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:38.910 06:31:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:38.910 06:31:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:38.910 06:31:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.910 06:31:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:38.910 06:31:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.910 06:31:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:38.910 "name": "Existed_Raid", 00:21:38.910 "uuid": "73a2b4bf-dda1-451d-bc35-d97201624034", 00:21:38.910 "strip_size_kb": 0, 00:21:38.910 "state": "configuring", 00:21:38.910 "raid_level": "raid1", 00:21:38.910 "superblock": true, 00:21:38.910 "num_base_bdevs": 2, 00:21:38.910 "num_base_bdevs_discovered": 1, 00:21:38.910 "num_base_bdevs_operational": 2, 00:21:38.910 "base_bdevs_list": [ 00:21:38.910 { 00:21:38.910 "name": "BaseBdev1", 00:21:38.910 "uuid": "2514a612-54a2-4ddb-8add-9bac42ac3f53", 00:21:38.910 "is_configured": true, 00:21:38.910 "data_offset": 256, 00:21:38.910 "data_size": 7936 00:21:38.910 }, 00:21:38.910 { 00:21:38.910 "name": "BaseBdev2", 00:21:38.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:38.910 "is_configured": false, 00:21:38.910 "data_offset": 0, 00:21:38.910 "data_size": 0 00:21:38.910 } 00:21:38.910 ] 00:21:38.910 }' 00:21:38.910 06:31:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:21:38.910 06:31:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:39.168 06:31:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:21:39.168 06:31:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.168 06:31:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:39.168 [2024-11-26 06:31:23.300123] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:39.168 [2024-11-26 06:31:23.300518] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:39.168 [2024-11-26 06:31:23.300570] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:21:39.168 [2024-11-26 06:31:23.300713] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:21:39.168 [2024-11-26 06:31:23.300833] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:39.168 [2024-11-26 06:31:23.300871] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:21:39.428 [2024-11-26 06:31:23.300994] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:39.428 BaseBdev2 00:21:39.428 06:31:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.428 06:31:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:21:39.428 06:31:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:21:39.428 06:31:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:21:39.428 06:31:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:21:39.428 06:31:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:39.428 06:31:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:39.428 06:31:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:39.428 06:31:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.428 06:31:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:39.428 06:31:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.428 06:31:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:39.428 06:31:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.428 06:31:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:39.428 [ 00:21:39.428 { 00:21:39.428 "name": "BaseBdev2", 00:21:39.428 "aliases": [ 00:21:39.428 "50d730e7-d6d6-40c8-acca-cf57aca7b66d" 00:21:39.428 ], 00:21:39.428 "product_name": "Malloc disk", 00:21:39.428 "block_size": 4128, 00:21:39.428 "num_blocks": 8192, 00:21:39.428 "uuid": "50d730e7-d6d6-40c8-acca-cf57aca7b66d", 00:21:39.428 "md_size": 32, 00:21:39.428 "md_interleave": true, 00:21:39.428 "dif_type": 0, 00:21:39.428 "assigned_rate_limits": { 00:21:39.428 "rw_ios_per_sec": 0, 00:21:39.428 "rw_mbytes_per_sec": 0, 00:21:39.428 "r_mbytes_per_sec": 0, 00:21:39.428 "w_mbytes_per_sec": 0 00:21:39.428 }, 00:21:39.428 "claimed": true, 00:21:39.428 "claim_type": "exclusive_write", 
00:21:39.428 "zoned": false, 00:21:39.428 "supported_io_types": { 00:21:39.428 "read": true, 00:21:39.428 "write": true, 00:21:39.428 "unmap": true, 00:21:39.428 "flush": true, 00:21:39.428 "reset": true, 00:21:39.428 "nvme_admin": false, 00:21:39.428 "nvme_io": false, 00:21:39.428 "nvme_io_md": false, 00:21:39.428 "write_zeroes": true, 00:21:39.428 "zcopy": true, 00:21:39.428 "get_zone_info": false, 00:21:39.428 "zone_management": false, 00:21:39.428 "zone_append": false, 00:21:39.428 "compare": false, 00:21:39.428 "compare_and_write": false, 00:21:39.428 "abort": true, 00:21:39.428 "seek_hole": false, 00:21:39.428 "seek_data": false, 00:21:39.428 "copy": true, 00:21:39.428 "nvme_iov_md": false 00:21:39.428 }, 00:21:39.428 "memory_domains": [ 00:21:39.428 { 00:21:39.428 "dma_device_id": "system", 00:21:39.428 "dma_device_type": 1 00:21:39.428 }, 00:21:39.428 { 00:21:39.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:39.428 "dma_device_type": 2 00:21:39.428 } 00:21:39.428 ], 00:21:39.428 "driver_specific": {} 00:21:39.428 } 00:21:39.428 ] 00:21:39.428 06:31:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.428 06:31:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:21:39.428 06:31:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:39.428 06:31:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:39.428 06:31:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:21:39.428 06:31:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:39.428 06:31:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:39.428 
06:31:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:39.428 06:31:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:39.428 06:31:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:39.428 06:31:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:39.428 06:31:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:39.428 06:31:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:39.428 06:31:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:39.428 06:31:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:39.428 06:31:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.428 06:31:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:39.428 06:31:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:39.428 06:31:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.428 06:31:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:39.428 "name": "Existed_Raid", 00:21:39.428 "uuid": "73a2b4bf-dda1-451d-bc35-d97201624034", 00:21:39.428 "strip_size_kb": 0, 00:21:39.428 "state": "online", 00:21:39.428 "raid_level": "raid1", 00:21:39.428 "superblock": true, 00:21:39.428 "num_base_bdevs": 2, 00:21:39.428 "num_base_bdevs_discovered": 2, 00:21:39.428 
"num_base_bdevs_operational": 2, 00:21:39.428 "base_bdevs_list": [ 00:21:39.428 { 00:21:39.428 "name": "BaseBdev1", 00:21:39.428 "uuid": "2514a612-54a2-4ddb-8add-9bac42ac3f53", 00:21:39.428 "is_configured": true, 00:21:39.428 "data_offset": 256, 00:21:39.428 "data_size": 7936 00:21:39.428 }, 00:21:39.428 { 00:21:39.428 "name": "BaseBdev2", 00:21:39.428 "uuid": "50d730e7-d6d6-40c8-acca-cf57aca7b66d", 00:21:39.428 "is_configured": true, 00:21:39.428 "data_offset": 256, 00:21:39.428 "data_size": 7936 00:21:39.428 } 00:21:39.428 ] 00:21:39.428 }' 00:21:39.428 06:31:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:39.428 06:31:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:39.686 06:31:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:21:39.686 06:31:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:39.686 06:31:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:39.686 06:31:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:39.686 06:31:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:21:39.686 06:31:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:39.686 06:31:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:39.686 06:31:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:39.686 06:31:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.945 06:31:23 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:39.945 [2024-11-26 06:31:23.823631] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:39.945 06:31:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.945 06:31:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:39.945 "name": "Existed_Raid", 00:21:39.945 "aliases": [ 00:21:39.945 "73a2b4bf-dda1-451d-bc35-d97201624034" 00:21:39.945 ], 00:21:39.945 "product_name": "Raid Volume", 00:21:39.945 "block_size": 4128, 00:21:39.945 "num_blocks": 7936, 00:21:39.945 "uuid": "73a2b4bf-dda1-451d-bc35-d97201624034", 00:21:39.945 "md_size": 32, 00:21:39.945 "md_interleave": true, 00:21:39.945 "dif_type": 0, 00:21:39.945 "assigned_rate_limits": { 00:21:39.945 "rw_ios_per_sec": 0, 00:21:39.945 "rw_mbytes_per_sec": 0, 00:21:39.945 "r_mbytes_per_sec": 0, 00:21:39.945 "w_mbytes_per_sec": 0 00:21:39.945 }, 00:21:39.945 "claimed": false, 00:21:39.945 "zoned": false, 00:21:39.945 "supported_io_types": { 00:21:39.945 "read": true, 00:21:39.945 "write": true, 00:21:39.945 "unmap": false, 00:21:39.945 "flush": false, 00:21:39.945 "reset": true, 00:21:39.945 "nvme_admin": false, 00:21:39.945 "nvme_io": false, 00:21:39.945 "nvme_io_md": false, 00:21:39.945 "write_zeroes": true, 00:21:39.945 "zcopy": false, 00:21:39.945 "get_zone_info": false, 00:21:39.945 "zone_management": false, 00:21:39.945 "zone_append": false, 00:21:39.945 "compare": false, 00:21:39.945 "compare_and_write": false, 00:21:39.945 "abort": false, 00:21:39.945 "seek_hole": false, 00:21:39.945 "seek_data": false, 00:21:39.945 "copy": false, 00:21:39.945 "nvme_iov_md": false 00:21:39.945 }, 00:21:39.945 "memory_domains": [ 00:21:39.945 { 00:21:39.945 "dma_device_id": "system", 00:21:39.945 "dma_device_type": 1 00:21:39.945 }, 00:21:39.945 { 00:21:39.945 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:21:39.945 "dma_device_type": 2 00:21:39.945 }, 00:21:39.945 { 00:21:39.945 "dma_device_id": "system", 00:21:39.945 "dma_device_type": 1 00:21:39.945 }, 00:21:39.945 { 00:21:39.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:39.945 "dma_device_type": 2 00:21:39.945 } 00:21:39.945 ], 00:21:39.945 "driver_specific": { 00:21:39.945 "raid": { 00:21:39.945 "uuid": "73a2b4bf-dda1-451d-bc35-d97201624034", 00:21:39.945 "strip_size_kb": 0, 00:21:39.945 "state": "online", 00:21:39.945 "raid_level": "raid1", 00:21:39.945 "superblock": true, 00:21:39.945 "num_base_bdevs": 2, 00:21:39.945 "num_base_bdevs_discovered": 2, 00:21:39.945 "num_base_bdevs_operational": 2, 00:21:39.945 "base_bdevs_list": [ 00:21:39.945 { 00:21:39.945 "name": "BaseBdev1", 00:21:39.945 "uuid": "2514a612-54a2-4ddb-8add-9bac42ac3f53", 00:21:39.945 "is_configured": true, 00:21:39.945 "data_offset": 256, 00:21:39.945 "data_size": 7936 00:21:39.945 }, 00:21:39.945 { 00:21:39.945 "name": "BaseBdev2", 00:21:39.945 "uuid": "50d730e7-d6d6-40c8-acca-cf57aca7b66d", 00:21:39.945 "is_configured": true, 00:21:39.945 "data_offset": 256, 00:21:39.945 "data_size": 7936 00:21:39.945 } 00:21:39.945 ] 00:21:39.945 } 00:21:39.945 } 00:21:39.945 }' 00:21:39.945 06:31:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:39.945 06:31:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:21:39.945 BaseBdev2' 00:21:39.945 06:31:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:39.945 06:31:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:21:39.945 06:31:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:21:39.945 06:31:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:21:39.945 06:31:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:39.945 06:31:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.945 06:31:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:39.945 06:31:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.945 06:31:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:21:39.945 06:31:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:21:39.945 06:31:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:39.945 06:31:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:39.945 06:31:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:39.945 06:31:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.945 06:31:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:39.945 06:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.945 06:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:21:39.945 
06:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:21:39.945 06:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:39.945 06:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.945 06:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:39.945 [2024-11-26 06:31:24.026966] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:40.204 06:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.204 06:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:21:40.204 06:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:21:40.204 06:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:40.204 06:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:21:40.204 06:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:21:40.204 06:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:21:40.204 06:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:40.204 06:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:40.204 06:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:40.204 06:31:24 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:40.204 06:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:40.204 06:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:40.204 06:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:40.204 06:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:40.204 06:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:40.204 06:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:40.204 06:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.204 06:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:40.204 06:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:40.204 06:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.204 06:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:40.204 "name": "Existed_Raid", 00:21:40.204 "uuid": "73a2b4bf-dda1-451d-bc35-d97201624034", 00:21:40.204 "strip_size_kb": 0, 00:21:40.204 "state": "online", 00:21:40.204 "raid_level": "raid1", 00:21:40.204 "superblock": true, 00:21:40.204 "num_base_bdevs": 2, 00:21:40.204 "num_base_bdevs_discovered": 1, 00:21:40.204 "num_base_bdevs_operational": 1, 00:21:40.204 "base_bdevs_list": [ 00:21:40.204 { 00:21:40.204 "name": null, 00:21:40.204 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:21:40.204 "is_configured": false, 00:21:40.204 "data_offset": 0, 00:21:40.204 "data_size": 7936 00:21:40.204 }, 00:21:40.204 { 00:21:40.204 "name": "BaseBdev2", 00:21:40.204 "uuid": "50d730e7-d6d6-40c8-acca-cf57aca7b66d", 00:21:40.204 "is_configured": true, 00:21:40.204 "data_offset": 256, 00:21:40.204 "data_size": 7936 00:21:40.204 } 00:21:40.204 ] 00:21:40.204 }' 00:21:40.204 06:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:40.204 06:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:40.462 06:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:21:40.462 06:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:40.721 06:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:40.721 06:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:40.721 06:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.721 06:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:40.721 06:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.721 06:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:40.721 06:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:40.721 06:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:21:40.721 06:31:24 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.721 06:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:40.721 [2024-11-26 06:31:24.627622] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:40.721 [2024-11-26 06:31:24.627755] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:40.721 [2024-11-26 06:31:24.731686] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:40.721 [2024-11-26 06:31:24.731754] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:40.721 [2024-11-26 06:31:24.731768] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:21:40.721 06:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.721 06:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:40.721 06:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:40.721 06:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:21:40.721 06:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:40.721 06:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.721 06:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:40.721 06:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.721 06:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:21:40.721 06:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:21:40.721 06:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:21:40.721 06:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 89030 00:21:40.721 06:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89030 ']' 00:21:40.721 06:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89030 00:21:40.721 06:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:21:40.721 06:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:40.721 06:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89030 00:21:40.721 06:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:40.721 06:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:40.721 06:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89030' 00:21:40.721 killing process with pid 89030 00:21:40.721 06:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 89030 00:21:40.721 06:31:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 89030 00:21:40.721 [2024-11-26 06:31:24.827443] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:40.721 [2024-11-26 06:31:24.847226] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:42.099 
06:31:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:21:42.099 ************************************ 00:21:42.099 END TEST raid_state_function_test_sb_md_interleaved 00:21:42.099 ************************************ 00:21:42.099 00:21:42.099 real 0m5.228s 00:21:42.099 user 0m7.376s 00:21:42.099 sys 0m1.042s 00:21:42.099 06:31:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:42.099 06:31:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:42.099 06:31:26 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:21:42.099 06:31:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:21:42.099 06:31:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:42.099 06:31:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:42.099 ************************************ 00:21:42.099 START TEST raid_superblock_test_md_interleaved 00:21:42.099 ************************************ 00:21:42.099 06:31:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:21:42.099 06:31:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:21:42.099 06:31:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:21:42.099 06:31:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:21:42.099 06:31:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:21:42.099 06:31:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:21:42.099 06:31:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:21:42.099 06:31:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:21:42.099 06:31:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:21:42.099 06:31:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:21:42.099 06:31:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:21:42.099 06:31:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:21:42.099 06:31:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:21:42.099 06:31:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:21:42.099 06:31:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:21:42.099 06:31:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:21:42.099 06:31:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=89282 00:21:42.099 06:31:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:21:42.099 06:31:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 89282 00:21:42.099 06:31:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89282 ']' 00:21:42.099 06:31:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:42.099 06:31:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:42.099 06:31:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:42.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:42.099 06:31:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:42.099 06:31:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:42.099 [2024-11-26 06:31:26.192795] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:21:42.099 [2024-11-26 06:31:26.193064] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89282 ] 00:21:42.358 [2024-11-26 06:31:26.373320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:42.617 [2024-11-26 06:31:26.512827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:42.617 [2024-11-26 06:31:26.742915] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:42.617 [2024-11-26 06:31:26.742976] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:43.187 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:43.187 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:21:43.187 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:21:43.187 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:43.187 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:21:43.187 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:21:43.187 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:21:43.187 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:43.187 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:43.187 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:43.187 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:21:43.187 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.187 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:43.187 malloc1 00:21:43.187 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.187 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:43.187 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.187 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:43.187 [2024-11-26 06:31:27.074243] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:43.187 [2024-11-26 06:31:27.074343] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:43.187 [2024-11-26 06:31:27.074410] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:43.187 [2024-11-26 06:31:27.074439] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:43.187 
[2024-11-26 06:31:27.076638] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:43.188 [2024-11-26 06:31:27.076708] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:43.188 pt1 00:21:43.188 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.188 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:43.188 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:43.188 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:21:43.188 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:21:43.188 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:43.188 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:43.188 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:43.188 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:43.188 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:21:43.188 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.188 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:43.188 malloc2 00:21:43.188 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.188 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:43.188 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.188 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:43.188 [2024-11-26 06:31:27.137836] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:43.188 [2024-11-26 06:31:27.137941] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:43.188 [2024-11-26 06:31:27.137983] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:43.188 [2024-11-26 06:31:27.138011] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:43.188 [2024-11-26 06:31:27.140162] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:43.188 [2024-11-26 06:31:27.140221] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:43.188 pt2 00:21:43.188 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.188 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:43.188 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:43.188 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:21:43.188 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.188 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:43.188 [2024-11-26 06:31:27.149864] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:43.188 [2024-11-26 06:31:27.151978] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:43.188 [2024-11-26 06:31:27.152177] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:43.188 [2024-11-26 06:31:27.152191] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:21:43.188 [2024-11-26 06:31:27.152281] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:21:43.188 [2024-11-26 06:31:27.152360] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:43.188 [2024-11-26 06:31:27.152372] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:43.188 [2024-11-26 06:31:27.152453] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:43.188 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.188 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:43.188 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:43.188 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:43.188 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:43.188 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:43.188 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:43.188 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:43.188 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:43.188 
06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:43.188 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:43.188 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:43.188 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:43.188 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.188 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:43.188 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.188 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:43.188 "name": "raid_bdev1", 00:21:43.188 "uuid": "7bd3684a-0d06-426f-88da-d057cf76466a", 00:21:43.188 "strip_size_kb": 0, 00:21:43.188 "state": "online", 00:21:43.188 "raid_level": "raid1", 00:21:43.188 "superblock": true, 00:21:43.188 "num_base_bdevs": 2, 00:21:43.188 "num_base_bdevs_discovered": 2, 00:21:43.188 "num_base_bdevs_operational": 2, 00:21:43.188 "base_bdevs_list": [ 00:21:43.188 { 00:21:43.188 "name": "pt1", 00:21:43.188 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:43.188 "is_configured": true, 00:21:43.188 "data_offset": 256, 00:21:43.188 "data_size": 7936 00:21:43.188 }, 00:21:43.188 { 00:21:43.188 "name": "pt2", 00:21:43.188 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:43.188 "is_configured": true, 00:21:43.188 "data_offset": 256, 00:21:43.188 "data_size": 7936 00:21:43.188 } 00:21:43.188 ] 00:21:43.188 }' 00:21:43.188 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:43.188 06:31:27 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:43.448 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:21:43.448 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:43.448 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:43.448 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:43.448 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:21:43.448 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:43.448 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:43.448 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:43.448 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.448 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:43.448 [2024-11-26 06:31:27.541508] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:43.448 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.708 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:43.708 "name": "raid_bdev1", 00:21:43.708 "aliases": [ 00:21:43.708 "7bd3684a-0d06-426f-88da-d057cf76466a" 00:21:43.708 ], 00:21:43.708 "product_name": "Raid Volume", 00:21:43.708 "block_size": 4128, 00:21:43.708 "num_blocks": 7936, 00:21:43.708 "uuid": "7bd3684a-0d06-426f-88da-d057cf76466a", 00:21:43.708 "md_size": 32, 
00:21:43.708 "md_interleave": true, 00:21:43.708 "dif_type": 0, 00:21:43.708 "assigned_rate_limits": { 00:21:43.708 "rw_ios_per_sec": 0, 00:21:43.708 "rw_mbytes_per_sec": 0, 00:21:43.708 "r_mbytes_per_sec": 0, 00:21:43.708 "w_mbytes_per_sec": 0 00:21:43.708 }, 00:21:43.708 "claimed": false, 00:21:43.708 "zoned": false, 00:21:43.708 "supported_io_types": { 00:21:43.708 "read": true, 00:21:43.708 "write": true, 00:21:43.708 "unmap": false, 00:21:43.708 "flush": false, 00:21:43.708 "reset": true, 00:21:43.708 "nvme_admin": false, 00:21:43.708 "nvme_io": false, 00:21:43.708 "nvme_io_md": false, 00:21:43.709 "write_zeroes": true, 00:21:43.709 "zcopy": false, 00:21:43.709 "get_zone_info": false, 00:21:43.709 "zone_management": false, 00:21:43.709 "zone_append": false, 00:21:43.709 "compare": false, 00:21:43.709 "compare_and_write": false, 00:21:43.709 "abort": false, 00:21:43.709 "seek_hole": false, 00:21:43.709 "seek_data": false, 00:21:43.709 "copy": false, 00:21:43.709 "nvme_iov_md": false 00:21:43.709 }, 00:21:43.709 "memory_domains": [ 00:21:43.709 { 00:21:43.709 "dma_device_id": "system", 00:21:43.709 "dma_device_type": 1 00:21:43.709 }, 00:21:43.709 { 00:21:43.709 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:43.709 "dma_device_type": 2 00:21:43.709 }, 00:21:43.709 { 00:21:43.709 "dma_device_id": "system", 00:21:43.709 "dma_device_type": 1 00:21:43.709 }, 00:21:43.709 { 00:21:43.709 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:43.709 "dma_device_type": 2 00:21:43.709 } 00:21:43.709 ], 00:21:43.709 "driver_specific": { 00:21:43.709 "raid": { 00:21:43.709 "uuid": "7bd3684a-0d06-426f-88da-d057cf76466a", 00:21:43.709 "strip_size_kb": 0, 00:21:43.709 "state": "online", 00:21:43.709 "raid_level": "raid1", 00:21:43.709 "superblock": true, 00:21:43.709 "num_base_bdevs": 2, 00:21:43.709 "num_base_bdevs_discovered": 2, 00:21:43.709 "num_base_bdevs_operational": 2, 00:21:43.709 "base_bdevs_list": [ 00:21:43.709 { 00:21:43.709 "name": "pt1", 00:21:43.709 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:21:43.709 "is_configured": true, 00:21:43.709 "data_offset": 256, 00:21:43.709 "data_size": 7936 00:21:43.709 }, 00:21:43.709 { 00:21:43.709 "name": "pt2", 00:21:43.709 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:43.709 "is_configured": true, 00:21:43.709 "data_offset": 256, 00:21:43.709 "data_size": 7936 00:21:43.709 } 00:21:43.709 ] 00:21:43.709 } 00:21:43.709 } 00:21:43.709 }' 00:21:43.709 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:43.709 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:43.709 pt2' 00:21:43.709 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:43.709 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:21:43.709 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:43.709 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:43.709 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.709 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:43.709 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:43.709 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.709 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:21:43.709 06:31:27 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:21:43.709 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:43.709 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:43.709 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.709 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:43.709 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:43.709 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.709 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:21:43.709 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:21:43.709 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:43.709 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:21:43.709 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.709 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:43.709 [2024-11-26 06:31:27.761044] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:43.709 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.709 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=7bd3684a-0d06-426f-88da-d057cf76466a 00:21:43.709 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 7bd3684a-0d06-426f-88da-d057cf76466a ']' 00:21:43.709 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:43.709 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.709 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:43.709 [2024-11-26 06:31:27.804673] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:43.709 [2024-11-26 06:31:27.804699] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:43.709 [2024-11-26 06:31:27.804799] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:43.709 [2024-11-26 06:31:27.804864] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:43.709 [2024-11-26 06:31:27.804877] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:43.709 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.709 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:21:43.709 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:43.709 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.709 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:43.709 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.969 06:31:27 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:21:43.969 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:21:43.969 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:43.969 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:21:43.969 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.969 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:43.969 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.969 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:43.969 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:21:43.969 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.969 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:43.969 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.969 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:43.969 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:21:43.969 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.969 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:43.969 06:31:27 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.969 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:21:43.969 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:43.969 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:21:43.969 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:43.969 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:43.969 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:43.969 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:43.969 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:43.969 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:43.969 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.969 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:43.969 [2024-11-26 06:31:27.944478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:43.969 [2024-11-26 06:31:27.946726] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:43.969 [2024-11-26 06:31:27.946867] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:21:43.969 [2024-11-26 06:31:27.946996] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:21:43.969 [2024-11-26 06:31:27.947118] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:43.969 [2024-11-26 06:31:27.947166] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:21:43.969 request: 00:21:43.969 { 00:21:43.969 "name": "raid_bdev1", 00:21:43.969 "raid_level": "raid1", 00:21:43.969 "base_bdevs": [ 00:21:43.969 "malloc1", 00:21:43.969 "malloc2" 00:21:43.969 ], 00:21:43.969 "superblock": false, 00:21:43.969 "method": "bdev_raid_create", 00:21:43.969 "req_id": 1 00:21:43.969 } 00:21:43.969 Got JSON-RPC error response 00:21:43.969 response: 00:21:43.969 { 00:21:43.969 "code": -17, 00:21:43.969 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:43.969 } 00:21:43.969 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:43.969 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:21:43.969 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:43.969 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:43.969 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:43.969 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:43.969 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:21:43.969 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.969 06:31:27 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:43.969 06:31:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.969 06:31:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:21:43.969 06:31:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:21:43.969 06:31:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:43.969 06:31:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.969 06:31:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:43.969 [2024-11-26 06:31:28.012368] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:43.969 [2024-11-26 06:31:28.012502] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:43.969 [2024-11-26 06:31:28.012541] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:43.969 [2024-11-26 06:31:28.012556] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:43.969 [2024-11-26 06:31:28.014920] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:43.969 [2024-11-26 06:31:28.014960] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:43.969 [2024-11-26 06:31:28.015028] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:43.969 [2024-11-26 06:31:28.015131] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:43.969 pt1 00:21:43.969 06:31:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.969 06:31:28 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:21:43.969 06:31:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:43.969 06:31:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:43.969 06:31:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:43.969 06:31:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:43.969 06:31:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:43.969 06:31:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:43.970 06:31:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:43.970 06:31:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:43.970 06:31:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:43.970 06:31:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:43.970 06:31:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:43.970 06:31:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.970 06:31:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:43.970 06:31:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.970 06:31:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:43.970 
"name": "raid_bdev1", 00:21:43.970 "uuid": "7bd3684a-0d06-426f-88da-d057cf76466a", 00:21:43.970 "strip_size_kb": 0, 00:21:43.970 "state": "configuring", 00:21:43.970 "raid_level": "raid1", 00:21:43.970 "superblock": true, 00:21:43.970 "num_base_bdevs": 2, 00:21:43.970 "num_base_bdevs_discovered": 1, 00:21:43.970 "num_base_bdevs_operational": 2, 00:21:43.970 "base_bdevs_list": [ 00:21:43.970 { 00:21:43.970 "name": "pt1", 00:21:43.970 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:43.970 "is_configured": true, 00:21:43.970 "data_offset": 256, 00:21:43.970 "data_size": 7936 00:21:43.970 }, 00:21:43.970 { 00:21:43.970 "name": null, 00:21:43.970 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:43.970 "is_configured": false, 00:21:43.970 "data_offset": 256, 00:21:43.970 "data_size": 7936 00:21:43.970 } 00:21:43.970 ] 00:21:43.970 }' 00:21:43.970 06:31:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:43.970 06:31:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:44.537 06:31:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:21:44.537 06:31:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:21:44.537 06:31:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:44.537 06:31:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:44.537 06:31:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.537 06:31:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:44.537 [2024-11-26 06:31:28.455690] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:44.537 [2024-11-26 06:31:28.455853] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:44.537 [2024-11-26 06:31:28.455898] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:21:44.537 [2024-11-26 06:31:28.455947] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:44.537 [2024-11-26 06:31:28.456232] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:44.537 [2024-11-26 06:31:28.456285] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:44.537 [2024-11-26 06:31:28.456396] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:44.537 [2024-11-26 06:31:28.456465] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:44.537 [2024-11-26 06:31:28.456618] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:44.537 [2024-11-26 06:31:28.456661] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:21:44.537 [2024-11-26 06:31:28.456786] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:21:44.537 [2024-11-26 06:31:28.456914] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:44.537 [2024-11-26 06:31:28.456954] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:21:44.537 [2024-11-26 06:31:28.457133] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:44.537 pt2 00:21:44.537 06:31:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.537 06:31:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:21:44.537 06:31:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:44.537 06:31:28 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:44.537 06:31:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:44.537 06:31:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:44.537 06:31:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:44.537 06:31:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:44.537 06:31:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:44.537 06:31:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:44.537 06:31:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:44.537 06:31:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:44.537 06:31:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:44.537 06:31:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:44.537 06:31:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.537 06:31:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:44.537 06:31:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:44.537 06:31:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.537 06:31:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:44.537 "name": 
"raid_bdev1", 00:21:44.537 "uuid": "7bd3684a-0d06-426f-88da-d057cf76466a", 00:21:44.537 "strip_size_kb": 0, 00:21:44.537 "state": "online", 00:21:44.537 "raid_level": "raid1", 00:21:44.537 "superblock": true, 00:21:44.537 "num_base_bdevs": 2, 00:21:44.537 "num_base_bdevs_discovered": 2, 00:21:44.537 "num_base_bdevs_operational": 2, 00:21:44.537 "base_bdevs_list": [ 00:21:44.537 { 00:21:44.537 "name": "pt1", 00:21:44.537 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:44.537 "is_configured": true, 00:21:44.537 "data_offset": 256, 00:21:44.537 "data_size": 7936 00:21:44.537 }, 00:21:44.537 { 00:21:44.537 "name": "pt2", 00:21:44.537 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:44.537 "is_configured": true, 00:21:44.537 "data_offset": 256, 00:21:44.537 "data_size": 7936 00:21:44.537 } 00:21:44.537 ] 00:21:44.537 }' 00:21:44.537 06:31:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:44.537 06:31:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:44.797 06:31:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:21:44.797 06:31:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:44.797 06:31:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:44.797 06:31:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:44.797 06:31:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:21:44.797 06:31:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:44.797 06:31:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:44.797 06:31:28 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.797 06:31:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:44.797 06:31:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:44.797 [2024-11-26 06:31:28.907217] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:44.797 06:31:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.056 06:31:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:45.056 "name": "raid_bdev1", 00:21:45.056 "aliases": [ 00:21:45.056 "7bd3684a-0d06-426f-88da-d057cf76466a" 00:21:45.056 ], 00:21:45.056 "product_name": "Raid Volume", 00:21:45.056 "block_size": 4128, 00:21:45.056 "num_blocks": 7936, 00:21:45.056 "uuid": "7bd3684a-0d06-426f-88da-d057cf76466a", 00:21:45.056 "md_size": 32, 00:21:45.056 "md_interleave": true, 00:21:45.056 "dif_type": 0, 00:21:45.056 "assigned_rate_limits": { 00:21:45.056 "rw_ios_per_sec": 0, 00:21:45.056 "rw_mbytes_per_sec": 0, 00:21:45.056 "r_mbytes_per_sec": 0, 00:21:45.056 "w_mbytes_per_sec": 0 00:21:45.056 }, 00:21:45.056 "claimed": false, 00:21:45.056 "zoned": false, 00:21:45.056 "supported_io_types": { 00:21:45.056 "read": true, 00:21:45.056 "write": true, 00:21:45.056 "unmap": false, 00:21:45.056 "flush": false, 00:21:45.056 "reset": true, 00:21:45.056 "nvme_admin": false, 00:21:45.056 "nvme_io": false, 00:21:45.056 "nvme_io_md": false, 00:21:45.056 "write_zeroes": true, 00:21:45.057 "zcopy": false, 00:21:45.057 "get_zone_info": false, 00:21:45.057 "zone_management": false, 00:21:45.057 "zone_append": false, 00:21:45.057 "compare": false, 00:21:45.057 "compare_and_write": false, 00:21:45.057 "abort": false, 00:21:45.057 "seek_hole": false, 00:21:45.057 "seek_data": false, 00:21:45.057 "copy": false, 00:21:45.057 "nvme_iov_md": 
false 00:21:45.057 }, 00:21:45.057 "memory_domains": [ 00:21:45.057 { 00:21:45.057 "dma_device_id": "system", 00:21:45.057 "dma_device_type": 1 00:21:45.057 }, 00:21:45.057 { 00:21:45.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:45.057 "dma_device_type": 2 00:21:45.057 }, 00:21:45.057 { 00:21:45.057 "dma_device_id": "system", 00:21:45.057 "dma_device_type": 1 00:21:45.057 }, 00:21:45.057 { 00:21:45.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:45.057 "dma_device_type": 2 00:21:45.057 } 00:21:45.057 ], 00:21:45.057 "driver_specific": { 00:21:45.057 "raid": { 00:21:45.057 "uuid": "7bd3684a-0d06-426f-88da-d057cf76466a", 00:21:45.057 "strip_size_kb": 0, 00:21:45.057 "state": "online", 00:21:45.057 "raid_level": "raid1", 00:21:45.057 "superblock": true, 00:21:45.057 "num_base_bdevs": 2, 00:21:45.057 "num_base_bdevs_discovered": 2, 00:21:45.057 "num_base_bdevs_operational": 2, 00:21:45.057 "base_bdevs_list": [ 00:21:45.057 { 00:21:45.057 "name": "pt1", 00:21:45.057 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:45.057 "is_configured": true, 00:21:45.057 "data_offset": 256, 00:21:45.057 "data_size": 7936 00:21:45.057 }, 00:21:45.057 { 00:21:45.057 "name": "pt2", 00:21:45.057 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:45.057 "is_configured": true, 00:21:45.057 "data_offset": 256, 00:21:45.057 "data_size": 7936 00:21:45.057 } 00:21:45.057 ] 00:21:45.057 } 00:21:45.057 } 00:21:45.057 }' 00:21:45.057 06:31:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:45.057 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:45.057 pt2' 00:21:45.057 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:45.057 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:21:45.057 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:45.057 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:45.057 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:45.057 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.057 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:45.057 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.057 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:21:45.057 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:21:45.057 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:45.057 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:45.057 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:45.057 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.057 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:45.057 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.057 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='4128 32 true 0' 00:21:45.057 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:21:45.057 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:45.057 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:21:45.057 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.057 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:45.057 [2024-11-26 06:31:29.126771] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:45.057 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.057 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 7bd3684a-0d06-426f-88da-d057cf76466a '!=' 7bd3684a-0d06-426f-88da-d057cf76466a ']' 00:21:45.057 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:21:45.057 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:45.057 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:21:45.057 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:21:45.057 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.057 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:45.057 [2024-11-26 06:31:29.174545] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:21:45.057 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:21:45.057 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:45.057 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:45.057 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:45.057 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:45.057 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:45.057 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:45.057 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:45.057 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:45.057 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:45.057 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:45.057 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:45.057 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:45.057 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.316 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:45.316 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.316 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:21:45.316 "name": "raid_bdev1", 00:21:45.316 "uuid": "7bd3684a-0d06-426f-88da-d057cf76466a", 00:21:45.316 "strip_size_kb": 0, 00:21:45.316 "state": "online", 00:21:45.316 "raid_level": "raid1", 00:21:45.316 "superblock": true, 00:21:45.316 "num_base_bdevs": 2, 00:21:45.316 "num_base_bdevs_discovered": 1, 00:21:45.316 "num_base_bdevs_operational": 1, 00:21:45.316 "base_bdevs_list": [ 00:21:45.316 { 00:21:45.316 "name": null, 00:21:45.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:45.316 "is_configured": false, 00:21:45.316 "data_offset": 0, 00:21:45.316 "data_size": 7936 00:21:45.316 }, 00:21:45.316 { 00:21:45.316 "name": "pt2", 00:21:45.316 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:45.316 "is_configured": true, 00:21:45.316 "data_offset": 256, 00:21:45.316 "data_size": 7936 00:21:45.316 } 00:21:45.316 ] 00:21:45.316 }' 00:21:45.316 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:45.316 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:45.575 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:45.575 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.575 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:45.575 [2024-11-26 06:31:29.629735] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:45.575 [2024-11-26 06:31:29.629771] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:45.575 [2024-11-26 06:31:29.629875] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:45.575 [2024-11-26 06:31:29.629937] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:21:45.575 [2024-11-26 06:31:29.629950] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:21:45.575 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.575 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:21:45.575 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:45.575 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.575 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:45.575 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.575 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:21:45.575 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:21:45.575 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:21:45.575 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:45.575 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:21:45.575 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.576 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:45.576 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.576 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:21:45.576 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:45.576 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:21:45.576 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:21:45.576 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:21:45.576 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:45.576 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.576 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:45.576 [2024-11-26 06:31:29.705556] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:45.576 [2024-11-26 06:31:29.705619] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:45.576 [2024-11-26 06:31:29.705638] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:21:45.576 [2024-11-26 06:31:29.705648] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:45.834 [2024-11-26 06:31:29.707966] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:45.834 [2024-11-26 06:31:29.708007] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:45.834 [2024-11-26 06:31:29.708080] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:45.834 [2024-11-26 06:31:29.708172] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:45.834 [2024-11-26 06:31:29.708251] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:21:45.834 [2024-11-26 06:31:29.708264] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 7936, blocklen 4128 00:21:45.834 [2024-11-26 06:31:29.708360] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:45.834 [2024-11-26 06:31:29.708460] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:45.834 [2024-11-26 06:31:29.708474] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:21:45.834 [2024-11-26 06:31:29.708582] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:45.834 pt2 00:21:45.834 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.834 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:45.835 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:45.835 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:45.835 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:45.835 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:45.835 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:45.835 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:45.835 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:45.835 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:45.835 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:45.835 06:31:29 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:45.835 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:45.835 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.835 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:45.835 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.835 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:45.835 "name": "raid_bdev1", 00:21:45.835 "uuid": "7bd3684a-0d06-426f-88da-d057cf76466a", 00:21:45.835 "strip_size_kb": 0, 00:21:45.835 "state": "online", 00:21:45.835 "raid_level": "raid1", 00:21:45.835 "superblock": true, 00:21:45.835 "num_base_bdevs": 2, 00:21:45.835 "num_base_bdevs_discovered": 1, 00:21:45.835 "num_base_bdevs_operational": 1, 00:21:45.835 "base_bdevs_list": [ 00:21:45.835 { 00:21:45.835 "name": null, 00:21:45.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:45.835 "is_configured": false, 00:21:45.835 "data_offset": 256, 00:21:45.835 "data_size": 7936 00:21:45.835 }, 00:21:45.835 { 00:21:45.835 "name": "pt2", 00:21:45.835 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:45.835 "is_configured": true, 00:21:45.835 "data_offset": 256, 00:21:45.835 "data_size": 7936 00:21:45.835 } 00:21:45.835 ] 00:21:45.835 }' 00:21:45.835 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:45.835 06:31:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:46.094 06:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:46.094 06:31:30 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.094 06:31:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:46.094 [2024-11-26 06:31:30.184756] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:46.094 [2024-11-26 06:31:30.184854] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:46.094 [2024-11-26 06:31:30.184965] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:46.094 [2024-11-26 06:31:30.185080] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:46.094 [2024-11-26 06:31:30.185134] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:21:46.094 06:31:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.094 06:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:46.094 06:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:21:46.094 06:31:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.094 06:31:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:46.094 06:31:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.354 06:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:21:46.354 06:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:21:46.354 06:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:21:46.354 06:31:30 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:46.354 06:31:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.354 06:31:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:46.354 [2024-11-26 06:31:30.248666] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:46.354 [2024-11-26 06:31:30.248788] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:46.354 [2024-11-26 06:31:30.248835] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:21:46.354 [2024-11-26 06:31:30.248870] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:46.354 [2024-11-26 06:31:30.251329] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:46.354 [2024-11-26 06:31:30.251400] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:46.354 [2024-11-26 06:31:30.251488] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:46.354 [2024-11-26 06:31:30.251591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:46.354 [2024-11-26 06:31:30.251761] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:21:46.354 [2024-11-26 06:31:30.251778] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:46.354 [2024-11-26 06:31:30.251802] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:21:46.354 [2024-11-26 06:31:30.251873] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:46.354 [2024-11-26 06:31:30.251953] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000008900 00:21:46.354 [2024-11-26 06:31:30.251962] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:21:46.354 [2024-11-26 06:31:30.252041] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:46.354 [2024-11-26 06:31:30.252180] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:21:46.354 [2024-11-26 06:31:30.252227] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:21:46.354 [2024-11-26 06:31:30.252433] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:46.354 pt1 00:21:46.354 06:31:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.354 06:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:21:46.354 06:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:46.354 06:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:46.354 06:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:46.354 06:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:46.354 06:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:46.354 06:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:46.354 06:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:46.354 06:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:46.354 06:31:30 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:46.354 06:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:46.354 06:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:46.354 06:31:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.354 06:31:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:46.354 06:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:46.354 06:31:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.354 06:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:46.354 "name": "raid_bdev1", 00:21:46.354 "uuid": "7bd3684a-0d06-426f-88da-d057cf76466a", 00:21:46.354 "strip_size_kb": 0, 00:21:46.354 "state": "online", 00:21:46.354 "raid_level": "raid1", 00:21:46.354 "superblock": true, 00:21:46.354 "num_base_bdevs": 2, 00:21:46.354 "num_base_bdevs_discovered": 1, 00:21:46.354 "num_base_bdevs_operational": 1, 00:21:46.354 "base_bdevs_list": [ 00:21:46.354 { 00:21:46.354 "name": null, 00:21:46.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:46.354 "is_configured": false, 00:21:46.354 "data_offset": 256, 00:21:46.354 "data_size": 7936 00:21:46.354 }, 00:21:46.354 { 00:21:46.354 "name": "pt2", 00:21:46.354 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:46.354 "is_configured": true, 00:21:46.354 "data_offset": 256, 00:21:46.354 "data_size": 7936 00:21:46.354 } 00:21:46.354 ] 00:21:46.354 }' 00:21:46.355 06:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:46.355 06:31:30 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:21:46.613 06:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:21:46.613 06:31:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.613 06:31:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:46.614 06:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:21:46.614 06:31:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.614 06:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:21:46.614 06:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:46.614 06:31:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.614 06:31:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:46.614 06:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:21:46.873 [2024-11-26 06:31:30.748157] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:46.873 06:31:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.873 06:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 7bd3684a-0d06-426f-88da-d057cf76466a '!=' 7bd3684a-0d06-426f-88da-d057cf76466a ']' 00:21:46.873 06:31:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 89282 00:21:46.873 06:31:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89282 ']' 00:21:46.873 06:31:30 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89282 00:21:46.873 06:31:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:21:46.873 06:31:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:46.873 06:31:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89282 00:21:46.873 killing process with pid 89282 00:21:46.873 06:31:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:46.873 06:31:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:46.873 06:31:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89282' 00:21:46.873 06:31:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 89282 00:21:46.873 [2024-11-26 06:31:30.832700] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:46.873 [2024-11-26 06:31:30.832812] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:46.873 06:31:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 89282 00:21:46.873 [2024-11-26 06:31:30.832869] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:46.873 [2024-11-26 06:31:30.832887] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:21:47.137 [2024-11-26 06:31:31.062840] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:48.522 06:31:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:21:48.522 00:21:48.522 real 0m6.174s 00:21:48.522 user 0m9.129s 00:21:48.522 sys 0m1.257s 00:21:48.522 
************************************ 00:21:48.522 END TEST raid_superblock_test_md_interleaved 00:21:48.522 ************************************ 00:21:48.522 06:31:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:48.522 06:31:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:48.522 06:31:32 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:21:48.522 06:31:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:21:48.522 06:31:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:48.522 06:31:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:48.522 ************************************ 00:21:48.522 START TEST raid_rebuild_test_sb_md_interleaved 00:21:48.522 ************************************ 00:21:48.522 06:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:21:48.522 06:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:21:48.522 06:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:21:48.522 06:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:21:48.522 06:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:21:48.522 06:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:21:48.522 06:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:21:48.522 06:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:48.522 06:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:21:48.522 06:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:48.522 06:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:48.522 06:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:21:48.522 06:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:48.522 06:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:48.522 06:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:48.522 06:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:21:48.522 06:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:21:48.522 06:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:21:48.522 06:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:21:48.522 06:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:21:48.522 06:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:21:48.522 06:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:21:48.522 06:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:21:48.522 06:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:21:48.522 06:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:21:48.522 06:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@597 -- # raid_pid=89605 00:21:48.522 06:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:48.522 06:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 89605 00:21:48.522 06:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89605 ']' 00:21:48.522 06:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:48.522 06:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:48.522 06:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:48.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:48.522 06:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:48.522 06:31:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:48.522 [2024-11-26 06:31:32.444216] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:21:48.522 [2024-11-26 06:31:32.444414] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89605 ] 00:21:48.522 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:48.522 Zero copy mechanism will not be used. 
00:21:48.522 [2024-11-26 06:31:32.625689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:48.782 [2024-11-26 06:31:32.765610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:49.043 [2024-11-26 06:31:33.015414] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:49.043 [2024-11-26 06:31:33.015516] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:49.303 06:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:49.303 06:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:21:49.304 06:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:49.304 06:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:21:49.304 06:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.304 06:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:49.304 BaseBdev1_malloc 00:21:49.304 06:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.304 06:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:49.304 06:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.304 06:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:49.304 [2024-11-26 06:31:33.320582] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:49.304 [2024-11-26 06:31:33.320687] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:49.304 
[2024-11-26 06:31:33.320737] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:49.304 [2024-11-26 06:31:33.320750] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:49.304 [2024-11-26 06:31:33.322959] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:49.304 [2024-11-26 06:31:33.323001] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:49.304 BaseBdev1 00:21:49.304 06:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.304 06:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:49.304 06:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:21:49.304 06:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.304 06:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:49.304 BaseBdev2_malloc 00:21:49.304 06:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.304 06:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:49.304 06:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.304 06:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:49.304 [2024-11-26 06:31:33.380159] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:49.304 [2024-11-26 06:31:33.380237] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:49.304 [2024-11-26 06:31:33.380265] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:49.304 [2024-11-26 06:31:33.380279] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:49.304 [2024-11-26 06:31:33.382550] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:49.304 [2024-11-26 06:31:33.382628] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:49.304 BaseBdev2 00:21:49.304 06:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.304 06:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:21:49.304 06:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.304 06:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:49.564 spare_malloc 00:21:49.564 06:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.564 06:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:49.564 06:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.564 06:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:49.564 spare_delay 00:21:49.564 06:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.564 06:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:49.564 06:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.564 06:31:33 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:49.564 [2024-11-26 06:31:33.465030] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:49.564 [2024-11-26 06:31:33.465109] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:49.564 [2024-11-26 06:31:33.465133] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:21:49.564 [2024-11-26 06:31:33.465144] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:49.564 [2024-11-26 06:31:33.467299] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:49.564 [2024-11-26 06:31:33.467377] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:49.564 spare 00:21:49.564 06:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.564 06:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:21:49.564 06:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.564 06:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:49.564 [2024-11-26 06:31:33.477050] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:49.564 [2024-11-26 06:31:33.479168] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:49.564 [2024-11-26 06:31:33.479360] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:49.564 [2024-11-26 06:31:33.479376] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:21:49.564 [2024-11-26 06:31:33.479494] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:21:49.564 [2024-11-26 06:31:33.479585] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:49.564 [2024-11-26 06:31:33.479593] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:49.564 [2024-11-26 06:31:33.479664] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:49.564 06:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.564 06:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:49.564 06:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:49.564 06:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:49.564 06:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:49.564 06:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:49.564 06:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:49.564 06:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:49.564 06:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:49.564 06:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:49.564 06:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:49.564 06:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:49.564 06:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.564 06:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:49.564 06:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:49.564 06:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.564 06:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:49.564 "name": "raid_bdev1", 00:21:49.564 "uuid": "8b2d8b8c-04c0-4acd-bddd-92ccb0488534", 00:21:49.564 "strip_size_kb": 0, 00:21:49.564 "state": "online", 00:21:49.564 "raid_level": "raid1", 00:21:49.564 "superblock": true, 00:21:49.564 "num_base_bdevs": 2, 00:21:49.564 "num_base_bdevs_discovered": 2, 00:21:49.564 "num_base_bdevs_operational": 2, 00:21:49.564 "base_bdevs_list": [ 00:21:49.564 { 00:21:49.564 "name": "BaseBdev1", 00:21:49.564 "uuid": "3f2ec3d5-c8ea-5802-973a-ff5391cf0aa7", 00:21:49.564 "is_configured": true, 00:21:49.564 "data_offset": 256, 00:21:49.564 "data_size": 7936 00:21:49.564 }, 00:21:49.564 { 00:21:49.564 "name": "BaseBdev2", 00:21:49.564 "uuid": "0822ada5-73db-5114-81bf-cebbb1a13ff8", 00:21:49.564 "is_configured": true, 00:21:49.564 "data_offset": 256, 00:21:49.564 "data_size": 7936 00:21:49.564 } 00:21:49.564 ] 00:21:49.564 }' 00:21:49.564 06:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:49.564 06:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:49.824 06:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:49.824 06:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:21:49.824 06:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:21:49.824 06:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:49.824 [2024-11-26 06:31:33.940629] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:50.085 06:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.085 06:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:21:50.085 06:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:50.085 06:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.085 06:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:50.085 06:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:50.085 06:31:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.085 06:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:21:50.085 06:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:21:50.085 06:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:21:50.085 06:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:21:50.085 06:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.085 06:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:50.085 [2024-11-26 06:31:34.032141] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:50.085 06:31:34 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.085 06:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:50.085 06:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:50.085 06:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:50.085 06:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:50.085 06:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:50.085 06:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:50.085 06:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:50.085 06:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:50.085 06:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:50.085 06:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:50.085 06:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:50.085 06:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.085 06:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:50.085 06:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:50.085 06:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.085 06:31:34 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:50.085 "name": "raid_bdev1", 00:21:50.085 "uuid": "8b2d8b8c-04c0-4acd-bddd-92ccb0488534", 00:21:50.085 "strip_size_kb": 0, 00:21:50.085 "state": "online", 00:21:50.085 "raid_level": "raid1", 00:21:50.085 "superblock": true, 00:21:50.085 "num_base_bdevs": 2, 00:21:50.085 "num_base_bdevs_discovered": 1, 00:21:50.085 "num_base_bdevs_operational": 1, 00:21:50.085 "base_bdevs_list": [ 00:21:50.085 { 00:21:50.085 "name": null, 00:21:50.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:50.085 "is_configured": false, 00:21:50.085 "data_offset": 0, 00:21:50.085 "data_size": 7936 00:21:50.085 }, 00:21:50.085 { 00:21:50.085 "name": "BaseBdev2", 00:21:50.085 "uuid": "0822ada5-73db-5114-81bf-cebbb1a13ff8", 00:21:50.085 "is_configured": true, 00:21:50.085 "data_offset": 256, 00:21:50.085 "data_size": 7936 00:21:50.085 } 00:21:50.085 ] 00:21:50.085 }' 00:21:50.085 06:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:50.085 06:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:50.656 06:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:50.656 06:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.656 06:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:50.656 [2024-11-26 06:31:34.527377] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:50.656 [2024-11-26 06:31:34.545548] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:50.656 06:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.656 06:31:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@647 -- # sleep 1 00:21:50.656 [2024-11-26 06:31:34.547858] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:51.595 06:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:51.595 06:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:51.595 06:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:51.595 06:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:51.595 06:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:51.595 06:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:51.595 06:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:51.595 06:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.595 06:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:51.595 06:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.595 06:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:51.595 "name": "raid_bdev1", 00:21:51.595 "uuid": "8b2d8b8c-04c0-4acd-bddd-92ccb0488534", 00:21:51.595 "strip_size_kb": 0, 00:21:51.595 "state": "online", 00:21:51.595 "raid_level": "raid1", 00:21:51.595 "superblock": true, 00:21:51.595 "num_base_bdevs": 2, 00:21:51.595 "num_base_bdevs_discovered": 2, 00:21:51.595 "num_base_bdevs_operational": 2, 00:21:51.595 "process": { 00:21:51.595 "type": "rebuild", 00:21:51.595 "target": "spare", 
00:21:51.595 "progress": { 00:21:51.595 "blocks": 2560, 00:21:51.595 "percent": 32 00:21:51.595 } 00:21:51.595 }, 00:21:51.595 "base_bdevs_list": [ 00:21:51.595 { 00:21:51.595 "name": "spare", 00:21:51.595 "uuid": "7ef23c18-45e9-5b75-9efd-96ba317cb1f3", 00:21:51.595 "is_configured": true, 00:21:51.595 "data_offset": 256, 00:21:51.595 "data_size": 7936 00:21:51.595 }, 00:21:51.595 { 00:21:51.595 "name": "BaseBdev2", 00:21:51.595 "uuid": "0822ada5-73db-5114-81bf-cebbb1a13ff8", 00:21:51.595 "is_configured": true, 00:21:51.595 "data_offset": 256, 00:21:51.595 "data_size": 7936 00:21:51.595 } 00:21:51.595 ] 00:21:51.595 }' 00:21:51.595 06:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:51.595 06:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:51.595 06:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:51.595 06:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:51.595 06:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:51.595 06:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.595 06:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:51.595 [2024-11-26 06:31:35.711523] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:51.855 [2024-11-26 06:31:35.757263] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:51.855 [2024-11-26 06:31:35.757383] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:51.855 [2024-11-26 06:31:35.757400] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:21:51.855 [2024-11-26 06:31:35.757415] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:51.855 06:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.855 06:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:51.855 06:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:51.855 06:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:51.855 06:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:51.855 06:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:51.855 06:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:51.855 06:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:51.855 06:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:51.855 06:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:51.855 06:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:51.855 06:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:51.855 06:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:51.855 06:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.855 06:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:21:51.855 06:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.855 06:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:51.855 "name": "raid_bdev1", 00:21:51.855 "uuid": "8b2d8b8c-04c0-4acd-bddd-92ccb0488534", 00:21:51.855 "strip_size_kb": 0, 00:21:51.855 "state": "online", 00:21:51.855 "raid_level": "raid1", 00:21:51.855 "superblock": true, 00:21:51.855 "num_base_bdevs": 2, 00:21:51.855 "num_base_bdevs_discovered": 1, 00:21:51.855 "num_base_bdevs_operational": 1, 00:21:51.855 "base_bdevs_list": [ 00:21:51.855 { 00:21:51.855 "name": null, 00:21:51.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:51.855 "is_configured": false, 00:21:51.855 "data_offset": 0, 00:21:51.855 "data_size": 7936 00:21:51.855 }, 00:21:51.855 { 00:21:51.855 "name": "BaseBdev2", 00:21:51.855 "uuid": "0822ada5-73db-5114-81bf-cebbb1a13ff8", 00:21:51.855 "is_configured": true, 00:21:51.855 "data_offset": 256, 00:21:51.855 "data_size": 7936 00:21:51.855 } 00:21:51.855 ] 00:21:51.855 }' 00:21:51.855 06:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:51.855 06:31:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:52.115 06:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:52.115 06:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:52.115 06:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:52.115 06:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:52.115 06:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:21:52.374 06:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:52.374 06:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:52.374 06:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.374 06:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:52.374 06:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.374 06:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:52.374 "name": "raid_bdev1", 00:21:52.374 "uuid": "8b2d8b8c-04c0-4acd-bddd-92ccb0488534", 00:21:52.374 "strip_size_kb": 0, 00:21:52.374 "state": "online", 00:21:52.374 "raid_level": "raid1", 00:21:52.374 "superblock": true, 00:21:52.374 "num_base_bdevs": 2, 00:21:52.374 "num_base_bdevs_discovered": 1, 00:21:52.374 "num_base_bdevs_operational": 1, 00:21:52.374 "base_bdevs_list": [ 00:21:52.374 { 00:21:52.374 "name": null, 00:21:52.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:52.374 "is_configured": false, 00:21:52.374 "data_offset": 0, 00:21:52.374 "data_size": 7936 00:21:52.374 }, 00:21:52.374 { 00:21:52.374 "name": "BaseBdev2", 00:21:52.374 "uuid": "0822ada5-73db-5114-81bf-cebbb1a13ff8", 00:21:52.374 "is_configured": true, 00:21:52.375 "data_offset": 256, 00:21:52.375 "data_size": 7936 00:21:52.375 } 00:21:52.375 ] 00:21:52.375 }' 00:21:52.375 06:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:52.375 06:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:52.375 06:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:52.375 
06:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:52.375 06:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:52.375 06:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.375 06:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:52.375 [2024-11-26 06:31:36.378669] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:52.375 [2024-11-26 06:31:36.397040] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:52.375 06:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.375 06:31:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:21:52.375 [2024-11-26 06:31:36.399501] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:53.315 06:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:53.315 06:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:53.315 06:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:53.315 06:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:53.315 06:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:53.315 06:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:53.315 06:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:21:53.315 06:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.315 06:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:53.315 06:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.575 06:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:53.575 "name": "raid_bdev1", 00:21:53.575 "uuid": "8b2d8b8c-04c0-4acd-bddd-92ccb0488534", 00:21:53.575 "strip_size_kb": 0, 00:21:53.575 "state": "online", 00:21:53.575 "raid_level": "raid1", 00:21:53.575 "superblock": true, 00:21:53.575 "num_base_bdevs": 2, 00:21:53.575 "num_base_bdevs_discovered": 2, 00:21:53.575 "num_base_bdevs_operational": 2, 00:21:53.575 "process": { 00:21:53.575 "type": "rebuild", 00:21:53.575 "target": "spare", 00:21:53.575 "progress": { 00:21:53.575 "blocks": 2560, 00:21:53.575 "percent": 32 00:21:53.575 } 00:21:53.575 }, 00:21:53.575 "base_bdevs_list": [ 00:21:53.575 { 00:21:53.575 "name": "spare", 00:21:53.575 "uuid": "7ef23c18-45e9-5b75-9efd-96ba317cb1f3", 00:21:53.575 "is_configured": true, 00:21:53.575 "data_offset": 256, 00:21:53.575 "data_size": 7936 00:21:53.575 }, 00:21:53.575 { 00:21:53.575 "name": "BaseBdev2", 00:21:53.575 "uuid": "0822ada5-73db-5114-81bf-cebbb1a13ff8", 00:21:53.575 "is_configured": true, 00:21:53.575 "data_offset": 256, 00:21:53.575 "data_size": 7936 00:21:53.575 } 00:21:53.575 ] 00:21:53.575 }' 00:21:53.575 06:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:53.575 06:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:53.575 06:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:53.575 06:31:37 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:53.575 06:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:21:53.575 06:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:21:53.575 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:21:53.575 06:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:21:53.575 06:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:21:53.575 06:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:21:53.575 06:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=770 00:21:53.575 06:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:53.575 06:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:53.575 06:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:53.575 06:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:53.575 06:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:53.575 06:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:53.575 06:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:53.575 06:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:53.575 06:31:37 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.575 06:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:53.575 06:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.575 06:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:53.575 "name": "raid_bdev1", 00:21:53.575 "uuid": "8b2d8b8c-04c0-4acd-bddd-92ccb0488534", 00:21:53.575 "strip_size_kb": 0, 00:21:53.575 "state": "online", 00:21:53.575 "raid_level": "raid1", 00:21:53.575 "superblock": true, 00:21:53.575 "num_base_bdevs": 2, 00:21:53.575 "num_base_bdevs_discovered": 2, 00:21:53.575 "num_base_bdevs_operational": 2, 00:21:53.575 "process": { 00:21:53.575 "type": "rebuild", 00:21:53.575 "target": "spare", 00:21:53.575 "progress": { 00:21:53.575 "blocks": 2816, 00:21:53.575 "percent": 35 00:21:53.575 } 00:21:53.575 }, 00:21:53.575 "base_bdevs_list": [ 00:21:53.575 { 00:21:53.575 "name": "spare", 00:21:53.575 "uuid": "7ef23c18-45e9-5b75-9efd-96ba317cb1f3", 00:21:53.575 "is_configured": true, 00:21:53.575 "data_offset": 256, 00:21:53.575 "data_size": 7936 00:21:53.575 }, 00:21:53.575 { 00:21:53.575 "name": "BaseBdev2", 00:21:53.575 "uuid": "0822ada5-73db-5114-81bf-cebbb1a13ff8", 00:21:53.575 "is_configured": true, 00:21:53.575 "data_offset": 256, 00:21:53.575 "data_size": 7936 00:21:53.575 } 00:21:53.575 ] 00:21:53.575 }' 00:21:53.575 06:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:53.575 06:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:53.575 06:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:53.575 06:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:53.575 06:31:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:54.521 06:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:54.521 06:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:54.521 06:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:54.521 06:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:54.781 06:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:54.781 06:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:54.781 06:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:54.781 06:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:54.781 06:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.781 06:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:54.781 06:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.781 06:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:54.781 "name": "raid_bdev1", 00:21:54.781 "uuid": "8b2d8b8c-04c0-4acd-bddd-92ccb0488534", 00:21:54.781 "strip_size_kb": 0, 00:21:54.781 "state": "online", 00:21:54.781 "raid_level": "raid1", 00:21:54.781 "superblock": true, 00:21:54.781 "num_base_bdevs": 2, 00:21:54.781 "num_base_bdevs_discovered": 2, 00:21:54.781 
"num_base_bdevs_operational": 2, 00:21:54.781 "process": { 00:21:54.781 "type": "rebuild", 00:21:54.781 "target": "spare", 00:21:54.781 "progress": { 00:21:54.781 "blocks": 5632, 00:21:54.781 "percent": 70 00:21:54.781 } 00:21:54.781 }, 00:21:54.781 "base_bdevs_list": [ 00:21:54.781 { 00:21:54.781 "name": "spare", 00:21:54.781 "uuid": "7ef23c18-45e9-5b75-9efd-96ba317cb1f3", 00:21:54.781 "is_configured": true, 00:21:54.781 "data_offset": 256, 00:21:54.781 "data_size": 7936 00:21:54.781 }, 00:21:54.781 { 00:21:54.781 "name": "BaseBdev2", 00:21:54.781 "uuid": "0822ada5-73db-5114-81bf-cebbb1a13ff8", 00:21:54.781 "is_configured": true, 00:21:54.781 "data_offset": 256, 00:21:54.781 "data_size": 7936 00:21:54.781 } 00:21:54.781 ] 00:21:54.781 }' 00:21:54.781 06:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:54.781 06:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:54.781 06:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:54.781 06:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:54.781 06:31:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:55.722 [2024-11-26 06:31:39.523041] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:55.722 [2024-11-26 06:31:39.523282] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:55.722 [2024-11-26 06:31:39.523481] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:55.722 06:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:55.722 06:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:21:55.722 06:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:55.722 06:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:55.722 06:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:55.722 06:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:55.722 06:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:55.722 06:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:55.722 06:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.722 06:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:55.722 06:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.982 06:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:55.982 "name": "raid_bdev1", 00:21:55.982 "uuid": "8b2d8b8c-04c0-4acd-bddd-92ccb0488534", 00:21:55.982 "strip_size_kb": 0, 00:21:55.982 "state": "online", 00:21:55.982 "raid_level": "raid1", 00:21:55.982 "superblock": true, 00:21:55.982 "num_base_bdevs": 2, 00:21:55.982 "num_base_bdevs_discovered": 2, 00:21:55.982 "num_base_bdevs_operational": 2, 00:21:55.982 "base_bdevs_list": [ 00:21:55.982 { 00:21:55.982 "name": "spare", 00:21:55.982 "uuid": "7ef23c18-45e9-5b75-9efd-96ba317cb1f3", 00:21:55.982 "is_configured": true, 00:21:55.982 "data_offset": 256, 00:21:55.982 "data_size": 7936 00:21:55.982 }, 00:21:55.982 { 00:21:55.982 "name": "BaseBdev2", 00:21:55.982 "uuid": "0822ada5-73db-5114-81bf-cebbb1a13ff8", 00:21:55.982 
"is_configured": true, 00:21:55.982 "data_offset": 256, 00:21:55.982 "data_size": 7936 00:21:55.982 } 00:21:55.982 ] 00:21:55.982 }' 00:21:55.982 06:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:55.982 06:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:55.982 06:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:55.982 06:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:21:55.982 06:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:21:55.982 06:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:55.982 06:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:55.982 06:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:55.982 06:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:55.982 06:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:55.982 06:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:55.982 06:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:55.982 06:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.982 06:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:55.982 06:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:21:55.982 06:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:55.982 "name": "raid_bdev1", 00:21:55.982 "uuid": "8b2d8b8c-04c0-4acd-bddd-92ccb0488534", 00:21:55.982 "strip_size_kb": 0, 00:21:55.982 "state": "online", 00:21:55.982 "raid_level": "raid1", 00:21:55.982 "superblock": true, 00:21:55.982 "num_base_bdevs": 2, 00:21:55.982 "num_base_bdevs_discovered": 2, 00:21:55.982 "num_base_bdevs_operational": 2, 00:21:55.982 "base_bdevs_list": [ 00:21:55.982 { 00:21:55.982 "name": "spare", 00:21:55.982 "uuid": "7ef23c18-45e9-5b75-9efd-96ba317cb1f3", 00:21:55.982 "is_configured": true, 00:21:55.982 "data_offset": 256, 00:21:55.982 "data_size": 7936 00:21:55.982 }, 00:21:55.982 { 00:21:55.982 "name": "BaseBdev2", 00:21:55.982 "uuid": "0822ada5-73db-5114-81bf-cebbb1a13ff8", 00:21:55.982 "is_configured": true, 00:21:55.982 "data_offset": 256, 00:21:55.982 "data_size": 7936 00:21:55.982 } 00:21:55.982 ] 00:21:55.982 }' 00:21:55.982 06:31:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:55.982 06:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:55.982 06:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:55.982 06:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:55.982 06:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:55.982 06:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:55.982 06:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:55.982 06:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid1 00:21:55.982 06:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:55.982 06:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:55.982 06:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:55.982 06:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:55.982 06:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:55.982 06:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:55.983 06:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:55.983 06:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:55.983 06:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.983 06:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:55.983 06:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.983 06:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:55.983 "name": "raid_bdev1", 00:21:55.983 "uuid": "8b2d8b8c-04c0-4acd-bddd-92ccb0488534", 00:21:55.983 "strip_size_kb": 0, 00:21:55.983 "state": "online", 00:21:55.983 "raid_level": "raid1", 00:21:55.983 "superblock": true, 00:21:55.983 "num_base_bdevs": 2, 00:21:55.983 "num_base_bdevs_discovered": 2, 00:21:55.983 "num_base_bdevs_operational": 2, 00:21:55.983 "base_bdevs_list": [ 00:21:55.983 { 00:21:55.983 "name": "spare", 00:21:55.983 "uuid": "7ef23c18-45e9-5b75-9efd-96ba317cb1f3", 00:21:55.983 
"is_configured": true, 00:21:55.983 "data_offset": 256, 00:21:55.983 "data_size": 7936 00:21:55.983 }, 00:21:55.983 { 00:21:55.983 "name": "BaseBdev2", 00:21:55.983 "uuid": "0822ada5-73db-5114-81bf-cebbb1a13ff8", 00:21:55.983 "is_configured": true, 00:21:55.983 "data_offset": 256, 00:21:55.983 "data_size": 7936 00:21:55.983 } 00:21:55.983 ] 00:21:55.983 }' 00:21:55.983 06:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:55.983 06:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:56.553 06:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:56.553 06:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.553 06:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:56.553 [2024-11-26 06:31:40.472518] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:56.553 [2024-11-26 06:31:40.472602] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:56.553 [2024-11-26 06:31:40.472768] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:56.553 [2024-11-26 06:31:40.472887] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:56.553 [2024-11-26 06:31:40.472936] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:56.553 06:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.553 06:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:21:56.553 06:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:56.553 
06:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.553 06:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:56.553 06:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.553 06:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:21:56.553 06:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:21:56.553 06:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:21:56.553 06:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:21:56.553 06:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.553 06:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:56.553 06:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.553 06:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:56.553 06:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.553 06:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:56.553 [2024-11-26 06:31:40.540487] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:56.553 [2024-11-26 06:31:40.540546] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:56.553 [2024-11-26 06:31:40.540576] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:21:56.553 [2024-11-26 06:31:40.540586] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:56.553 [2024-11-26 06:31:40.542915] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:56.553 [2024-11-26 06:31:40.542951] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:56.553 [2024-11-26 06:31:40.543010] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:56.553 [2024-11-26 06:31:40.543082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:56.553 [2024-11-26 06:31:40.543214] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:56.553 spare 00:21:56.553 06:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.553 06:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:21:56.553 06:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.553 06:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:56.553 [2024-11-26 06:31:40.643112] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:21:56.553 [2024-11-26 06:31:40.643146] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:21:56.553 [2024-11-26 06:31:40.643252] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:21:56.553 [2024-11-26 06:31:40.643349] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:21:56.553 [2024-11-26 06:31:40.643357] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:21:56.553 [2024-11-26 06:31:40.643446] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:56.553 06:31:40 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.553 06:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:56.553 06:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:56.553 06:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:56.553 06:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:56.553 06:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:56.553 06:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:56.553 06:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:56.553 06:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:56.553 06:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:56.553 06:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:56.553 06:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:56.553 06:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:56.553 06:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.553 06:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:56.553 06:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.813 06:31:40 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:56.813 "name": "raid_bdev1", 00:21:56.813 "uuid": "8b2d8b8c-04c0-4acd-bddd-92ccb0488534", 00:21:56.813 "strip_size_kb": 0, 00:21:56.813 "state": "online", 00:21:56.813 "raid_level": "raid1", 00:21:56.813 "superblock": true, 00:21:56.813 "num_base_bdevs": 2, 00:21:56.813 "num_base_bdevs_discovered": 2, 00:21:56.813 "num_base_bdevs_operational": 2, 00:21:56.813 "base_bdevs_list": [ 00:21:56.813 { 00:21:56.813 "name": "spare", 00:21:56.813 "uuid": "7ef23c18-45e9-5b75-9efd-96ba317cb1f3", 00:21:56.813 "is_configured": true, 00:21:56.813 "data_offset": 256, 00:21:56.813 "data_size": 7936 00:21:56.813 }, 00:21:56.813 { 00:21:56.813 "name": "BaseBdev2", 00:21:56.813 "uuid": "0822ada5-73db-5114-81bf-cebbb1a13ff8", 00:21:56.813 "is_configured": true, 00:21:56.813 "data_offset": 256, 00:21:56.813 "data_size": 7936 00:21:56.813 } 00:21:56.813 ] 00:21:56.813 }' 00:21:56.813 06:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:56.813 06:31:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:57.073 06:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:57.073 06:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:57.073 06:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:57.073 06:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:57.073 06:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:57.073 06:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:57.073 06:31:41 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:57.073 06:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.073 06:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:57.073 06:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.073 06:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:57.073 "name": "raid_bdev1", 00:21:57.073 "uuid": "8b2d8b8c-04c0-4acd-bddd-92ccb0488534", 00:21:57.073 "strip_size_kb": 0, 00:21:57.073 "state": "online", 00:21:57.073 "raid_level": "raid1", 00:21:57.073 "superblock": true, 00:21:57.073 "num_base_bdevs": 2, 00:21:57.073 "num_base_bdevs_discovered": 2, 00:21:57.073 "num_base_bdevs_operational": 2, 00:21:57.073 "base_bdevs_list": [ 00:21:57.073 { 00:21:57.073 "name": "spare", 00:21:57.073 "uuid": "7ef23c18-45e9-5b75-9efd-96ba317cb1f3", 00:21:57.073 "is_configured": true, 00:21:57.073 "data_offset": 256, 00:21:57.073 "data_size": 7936 00:21:57.073 }, 00:21:57.073 { 00:21:57.073 "name": "BaseBdev2", 00:21:57.073 "uuid": "0822ada5-73db-5114-81bf-cebbb1a13ff8", 00:21:57.073 "is_configured": true, 00:21:57.073 "data_offset": 256, 00:21:57.073 "data_size": 7936 00:21:57.073 } 00:21:57.073 ] 00:21:57.073 }' 00:21:57.073 06:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:57.073 06:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:57.073 06:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:57.334 06:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:57.334 06:31:41 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:57.334 06:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.334 06:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:57.334 06:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:57.334 06:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.334 06:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:21:57.334 06:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:57.334 06:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.334 06:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:57.334 [2024-11-26 06:31:41.267424] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:57.334 06:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.334 06:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:57.334 06:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:57.334 06:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:57.334 06:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:57.334 06:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:57.334 06:31:41 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:57.334 06:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:57.334 06:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:57.334 06:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:57.334 06:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:57.334 06:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:57.334 06:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:57.334 06:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.334 06:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:57.334 06:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.334 06:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:57.334 "name": "raid_bdev1", 00:21:57.334 "uuid": "8b2d8b8c-04c0-4acd-bddd-92ccb0488534", 00:21:57.334 "strip_size_kb": 0, 00:21:57.334 "state": "online", 00:21:57.334 "raid_level": "raid1", 00:21:57.334 "superblock": true, 00:21:57.334 "num_base_bdevs": 2, 00:21:57.334 "num_base_bdevs_discovered": 1, 00:21:57.334 "num_base_bdevs_operational": 1, 00:21:57.334 "base_bdevs_list": [ 00:21:57.334 { 00:21:57.334 "name": null, 00:21:57.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:57.334 "is_configured": false, 00:21:57.334 "data_offset": 0, 00:21:57.334 "data_size": 7936 00:21:57.334 }, 00:21:57.334 { 00:21:57.334 "name": "BaseBdev2", 00:21:57.334 
"uuid": "0822ada5-73db-5114-81bf-cebbb1a13ff8", 00:21:57.334 "is_configured": true, 00:21:57.334 "data_offset": 256, 00:21:57.334 "data_size": 7936 00:21:57.334 } 00:21:57.334 ] 00:21:57.334 }' 00:21:57.334 06:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:57.334 06:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:57.594 06:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:57.594 06:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.594 06:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:57.594 [2024-11-26 06:31:41.686813] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:57.594 [2024-11-26 06:31:41.687139] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:21:57.594 [2024-11-26 06:31:41.687204] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:21:57.594 [2024-11-26 06:31:41.687297] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:57.594 [2024-11-26 06:31:41.705155] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:21:57.594 06:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.594 06:31:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:21:57.594 [2024-11-26 06:31:41.707458] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:58.976 06:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:58.976 06:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:58.976 06:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:58.976 06:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:58.977 06:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:58.977 06:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:58.977 06:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:58.977 06:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.977 06:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:58.977 06:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.977 06:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:21:58.977 "name": "raid_bdev1", 00:21:58.977 "uuid": "8b2d8b8c-04c0-4acd-bddd-92ccb0488534", 00:21:58.977 "strip_size_kb": 0, 00:21:58.977 "state": "online", 00:21:58.977 "raid_level": "raid1", 00:21:58.977 "superblock": true, 00:21:58.977 "num_base_bdevs": 2, 00:21:58.977 "num_base_bdevs_discovered": 2, 00:21:58.977 "num_base_bdevs_operational": 2, 00:21:58.977 "process": { 00:21:58.977 "type": "rebuild", 00:21:58.977 "target": "spare", 00:21:58.977 "progress": { 00:21:58.977 "blocks": 2560, 00:21:58.977 "percent": 32 00:21:58.977 } 00:21:58.977 }, 00:21:58.977 "base_bdevs_list": [ 00:21:58.977 { 00:21:58.977 "name": "spare", 00:21:58.977 "uuid": "7ef23c18-45e9-5b75-9efd-96ba317cb1f3", 00:21:58.977 "is_configured": true, 00:21:58.977 "data_offset": 256, 00:21:58.977 "data_size": 7936 00:21:58.977 }, 00:21:58.977 { 00:21:58.977 "name": "BaseBdev2", 00:21:58.977 "uuid": "0822ada5-73db-5114-81bf-cebbb1a13ff8", 00:21:58.977 "is_configured": true, 00:21:58.977 "data_offset": 256, 00:21:58.977 "data_size": 7936 00:21:58.977 } 00:21:58.977 ] 00:21:58.977 }' 00:21:58.977 06:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:58.977 06:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:58.977 06:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:58.977 06:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:58.977 06:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:21:58.977 06:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.977 06:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:58.977 [2024-11-26 06:31:42.859172] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:58.977 [2024-11-26 06:31:42.916777] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:58.977 [2024-11-26 06:31:42.916848] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:58.977 [2024-11-26 06:31:42.916864] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:58.977 [2024-11-26 06:31:42.916874] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:58.977 06:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.977 06:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:58.977 06:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:58.977 06:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:58.977 06:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:58.977 06:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:58.977 06:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:58.977 06:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:58.977 06:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:58.977 06:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:58.977 06:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:58.977 06:31:42 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:58.977 06:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.977 06:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:58.977 06:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:58.977 06:31:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.977 06:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:58.977 "name": "raid_bdev1", 00:21:58.977 "uuid": "8b2d8b8c-04c0-4acd-bddd-92ccb0488534", 00:21:58.977 "strip_size_kb": 0, 00:21:58.977 "state": "online", 00:21:58.977 "raid_level": "raid1", 00:21:58.977 "superblock": true, 00:21:58.977 "num_base_bdevs": 2, 00:21:58.977 "num_base_bdevs_discovered": 1, 00:21:58.977 "num_base_bdevs_operational": 1, 00:21:58.977 "base_bdevs_list": [ 00:21:58.977 { 00:21:58.977 "name": null, 00:21:58.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:58.977 "is_configured": false, 00:21:58.977 "data_offset": 0, 00:21:58.977 "data_size": 7936 00:21:58.977 }, 00:21:58.977 { 00:21:58.977 "name": "BaseBdev2", 00:21:58.977 "uuid": "0822ada5-73db-5114-81bf-cebbb1a13ff8", 00:21:58.977 "is_configured": true, 00:21:58.977 "data_offset": 256, 00:21:58.977 "data_size": 7936 00:21:58.977 } 00:21:58.977 ] 00:21:58.977 }' 00:21:58.977 06:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:58.977 06:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:59.547 06:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:59.547 06:31:43 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.547 06:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:59.547 [2024-11-26 06:31:43.397917] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:59.547 [2024-11-26 06:31:43.398041] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:59.547 [2024-11-26 06:31:43.398104] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:21:59.547 [2024-11-26 06:31:43.398148] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:59.547 [2024-11-26 06:31:43.398449] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:59.547 [2024-11-26 06:31:43.398510] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:59.547 [2024-11-26 06:31:43.398626] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:59.547 [2024-11-26 06:31:43.398668] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:21:59.547 [2024-11-26 06:31:43.398716] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:21:59.547 [2024-11-26 06:31:43.398803] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:59.547 [2024-11-26 06:31:43.417187] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:21:59.547 spare 00:21:59.547 06:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.547 06:31:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:21:59.547 [2024-11-26 06:31:43.419466] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:00.486 06:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:00.486 06:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:00.486 06:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:00.486 06:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:00.486 06:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:00.486 06:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:00.486 06:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.486 06:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:00.486 06:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:00.486 06:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.486 06:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:22:00.486 "name": "raid_bdev1", 00:22:00.486 "uuid": "8b2d8b8c-04c0-4acd-bddd-92ccb0488534", 00:22:00.486 "strip_size_kb": 0, 00:22:00.486 "state": "online", 00:22:00.486 "raid_level": "raid1", 00:22:00.486 "superblock": true, 00:22:00.486 "num_base_bdevs": 2, 00:22:00.486 "num_base_bdevs_discovered": 2, 00:22:00.487 "num_base_bdevs_operational": 2, 00:22:00.487 "process": { 00:22:00.487 "type": "rebuild", 00:22:00.487 "target": "spare", 00:22:00.487 "progress": { 00:22:00.487 "blocks": 2560, 00:22:00.487 "percent": 32 00:22:00.487 } 00:22:00.487 }, 00:22:00.487 "base_bdevs_list": [ 00:22:00.487 { 00:22:00.487 "name": "spare", 00:22:00.487 "uuid": "7ef23c18-45e9-5b75-9efd-96ba317cb1f3", 00:22:00.487 "is_configured": true, 00:22:00.487 "data_offset": 256, 00:22:00.487 "data_size": 7936 00:22:00.487 }, 00:22:00.487 { 00:22:00.487 "name": "BaseBdev2", 00:22:00.487 "uuid": "0822ada5-73db-5114-81bf-cebbb1a13ff8", 00:22:00.487 "is_configured": true, 00:22:00.487 "data_offset": 256, 00:22:00.487 "data_size": 7936 00:22:00.487 } 00:22:00.487 ] 00:22:00.487 }' 00:22:00.487 06:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:00.487 06:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:00.487 06:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:00.487 06:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:00.487 06:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:22:00.487 06:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.487 06:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:00.487 [2024-11-26 
06:31:44.566866] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:00.747 [2024-11-26 06:31:44.628563] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:00.747 [2024-11-26 06:31:44.628674] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:00.747 [2024-11-26 06:31:44.628696] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:00.747 [2024-11-26 06:31:44.628704] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:00.747 06:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.747 06:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:00.747 06:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:00.747 06:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:00.747 06:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:00.747 06:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:00.747 06:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:00.747 06:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:00.747 06:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:00.747 06:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:00.747 06:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:00.747 06:31:44 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:00.747 06:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:00.747 06:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.747 06:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:00.747 06:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.747 06:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:00.747 "name": "raid_bdev1", 00:22:00.747 "uuid": "8b2d8b8c-04c0-4acd-bddd-92ccb0488534", 00:22:00.747 "strip_size_kb": 0, 00:22:00.747 "state": "online", 00:22:00.747 "raid_level": "raid1", 00:22:00.747 "superblock": true, 00:22:00.747 "num_base_bdevs": 2, 00:22:00.747 "num_base_bdevs_discovered": 1, 00:22:00.747 "num_base_bdevs_operational": 1, 00:22:00.747 "base_bdevs_list": [ 00:22:00.747 { 00:22:00.747 "name": null, 00:22:00.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:00.747 "is_configured": false, 00:22:00.747 "data_offset": 0, 00:22:00.747 "data_size": 7936 00:22:00.747 }, 00:22:00.747 { 00:22:00.747 "name": "BaseBdev2", 00:22:00.747 "uuid": "0822ada5-73db-5114-81bf-cebbb1a13ff8", 00:22:00.747 "is_configured": true, 00:22:00.747 "data_offset": 256, 00:22:00.747 "data_size": 7936 00:22:00.747 } 00:22:00.747 ] 00:22:00.747 }' 00:22:00.747 06:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:00.747 06:31:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:01.007 06:31:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:01.007 06:31:45 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:01.007 06:31:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:01.007 06:31:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:01.007 06:31:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:01.007 06:31:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:01.007 06:31:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:01.007 06:31:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.007 06:31:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:01.007 06:31:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.007 06:31:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:01.007 "name": "raid_bdev1", 00:22:01.007 "uuid": "8b2d8b8c-04c0-4acd-bddd-92ccb0488534", 00:22:01.007 "strip_size_kb": 0, 00:22:01.007 "state": "online", 00:22:01.007 "raid_level": "raid1", 00:22:01.007 "superblock": true, 00:22:01.007 "num_base_bdevs": 2, 00:22:01.007 "num_base_bdevs_discovered": 1, 00:22:01.007 "num_base_bdevs_operational": 1, 00:22:01.007 "base_bdevs_list": [ 00:22:01.007 { 00:22:01.007 "name": null, 00:22:01.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:01.007 "is_configured": false, 00:22:01.007 "data_offset": 0, 00:22:01.007 "data_size": 7936 00:22:01.007 }, 00:22:01.007 { 00:22:01.007 "name": "BaseBdev2", 00:22:01.007 "uuid": "0822ada5-73db-5114-81bf-cebbb1a13ff8", 00:22:01.007 "is_configured": true, 00:22:01.007 "data_offset": 256, 
00:22:01.007 "data_size": 7936 00:22:01.007 } 00:22:01.007 ] 00:22:01.007 }' 00:22:01.007 06:31:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:01.007 06:31:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:01.007 06:31:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:01.267 06:31:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:01.267 06:31:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:22:01.267 06:31:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.267 06:31:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:01.267 06:31:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.267 06:31:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:01.267 06:31:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.267 06:31:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:01.267 [2024-11-26 06:31:45.177317] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:01.267 [2024-11-26 06:31:45.177425] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:01.267 [2024-11-26 06:31:45.177474] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:22:01.267 [2024-11-26 06:31:45.177504] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:01.267 [2024-11-26 06:31:45.177796] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:01.267 [2024-11-26 06:31:45.177838] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:01.267 [2024-11-26 06:31:45.177937] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:22:01.267 [2024-11-26 06:31:45.177975] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:22:01.267 [2024-11-26 06:31:45.178039] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:01.267 [2024-11-26 06:31:45.178091] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:22:01.267 BaseBdev1 00:22:01.267 06:31:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.267 06:31:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:22:02.206 06:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:02.206 06:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:02.206 06:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:02.206 06:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:02.206 06:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:02.206 06:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:02.206 06:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:02.206 06:31:46 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:02.206 06:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:02.206 06:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:02.206 06:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:02.206 06:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.206 06:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:02.206 06:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:02.206 06:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.206 06:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:02.206 "name": "raid_bdev1", 00:22:02.206 "uuid": "8b2d8b8c-04c0-4acd-bddd-92ccb0488534", 00:22:02.206 "strip_size_kb": 0, 00:22:02.206 "state": "online", 00:22:02.206 "raid_level": "raid1", 00:22:02.206 "superblock": true, 00:22:02.206 "num_base_bdevs": 2, 00:22:02.206 "num_base_bdevs_discovered": 1, 00:22:02.206 "num_base_bdevs_operational": 1, 00:22:02.206 "base_bdevs_list": [ 00:22:02.206 { 00:22:02.206 "name": null, 00:22:02.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:02.206 "is_configured": false, 00:22:02.206 "data_offset": 0, 00:22:02.206 "data_size": 7936 00:22:02.206 }, 00:22:02.206 { 00:22:02.206 "name": "BaseBdev2", 00:22:02.206 "uuid": "0822ada5-73db-5114-81bf-cebbb1a13ff8", 00:22:02.206 "is_configured": true, 00:22:02.206 "data_offset": 256, 00:22:02.206 "data_size": 7936 00:22:02.206 } 00:22:02.206 ] 00:22:02.206 }' 00:22:02.206 06:31:46 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:02.206 06:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:02.775 06:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:02.775 06:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:02.775 06:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:02.775 06:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:02.775 06:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:02.775 06:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:02.775 06:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:02.775 06:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.775 06:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:02.775 06:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.775 06:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:02.775 "name": "raid_bdev1", 00:22:02.775 "uuid": "8b2d8b8c-04c0-4acd-bddd-92ccb0488534", 00:22:02.775 "strip_size_kb": 0, 00:22:02.775 "state": "online", 00:22:02.775 "raid_level": "raid1", 00:22:02.775 "superblock": true, 00:22:02.775 "num_base_bdevs": 2, 00:22:02.775 "num_base_bdevs_discovered": 1, 00:22:02.775 "num_base_bdevs_operational": 1, 00:22:02.775 "base_bdevs_list": [ 00:22:02.775 { 00:22:02.775 "name": 
null, 00:22:02.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:02.775 "is_configured": false, 00:22:02.775 "data_offset": 0, 00:22:02.775 "data_size": 7936 00:22:02.775 }, 00:22:02.775 { 00:22:02.775 "name": "BaseBdev2", 00:22:02.775 "uuid": "0822ada5-73db-5114-81bf-cebbb1a13ff8", 00:22:02.775 "is_configured": true, 00:22:02.775 "data_offset": 256, 00:22:02.775 "data_size": 7936 00:22:02.775 } 00:22:02.775 ] 00:22:02.775 }' 00:22:02.775 06:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:02.775 06:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:02.775 06:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:02.775 06:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:02.775 06:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:02.775 06:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:22:02.775 06:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:02.775 06:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:02.775 06:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:02.775 06:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:02.775 06:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:02.775 06:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:02.775 06:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.775 06:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:02.775 [2024-11-26 06:31:46.782685] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:02.775 [2024-11-26 06:31:46.782890] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:22:02.775 [2024-11-26 06:31:46.782909] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:02.775 request: 00:22:02.775 { 00:22:02.775 "base_bdev": "BaseBdev1", 00:22:02.775 "raid_bdev": "raid_bdev1", 00:22:02.775 "method": "bdev_raid_add_base_bdev", 00:22:02.775 "req_id": 1 00:22:02.775 } 00:22:02.775 Got JSON-RPC error response 00:22:02.775 response: 00:22:02.775 { 00:22:02.775 "code": -22, 00:22:02.775 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:22:02.775 } 00:22:02.775 06:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:02.775 06:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:22:02.775 06:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:02.775 06:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:02.775 06:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:02.775 06:31:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:22:03.713 06:31:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:22:03.713 06:31:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:03.713 06:31:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:03.713 06:31:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:03.713 06:31:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:03.713 06:31:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:03.713 06:31:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:03.713 06:31:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:03.713 06:31:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:03.713 06:31:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:03.713 06:31:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:03.713 06:31:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:03.713 06:31:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.713 06:31:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:03.713 06:31:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.713 06:31:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:03.713 "name": "raid_bdev1", 00:22:03.713 "uuid": "8b2d8b8c-04c0-4acd-bddd-92ccb0488534", 00:22:03.713 "strip_size_kb": 0, 
00:22:03.713 "state": "online", 00:22:03.713 "raid_level": "raid1", 00:22:03.713 "superblock": true, 00:22:03.713 "num_base_bdevs": 2, 00:22:03.713 "num_base_bdevs_discovered": 1, 00:22:03.713 "num_base_bdevs_operational": 1, 00:22:03.713 "base_bdevs_list": [ 00:22:03.713 { 00:22:03.713 "name": null, 00:22:03.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:03.713 "is_configured": false, 00:22:03.713 "data_offset": 0, 00:22:03.713 "data_size": 7936 00:22:03.713 }, 00:22:03.713 { 00:22:03.713 "name": "BaseBdev2", 00:22:03.713 "uuid": "0822ada5-73db-5114-81bf-cebbb1a13ff8", 00:22:03.713 "is_configured": true, 00:22:03.713 "data_offset": 256, 00:22:03.713 "data_size": 7936 00:22:03.713 } 00:22:03.713 ] 00:22:03.713 }' 00:22:03.713 06:31:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:03.713 06:31:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:04.284 06:31:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:04.284 06:31:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:04.284 06:31:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:04.284 06:31:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:04.284 06:31:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:04.284 06:31:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:04.284 06:31:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:04.284 06:31:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.284 
06:31:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:04.284 06:31:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.284 06:31:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:04.284 "name": "raid_bdev1", 00:22:04.284 "uuid": "8b2d8b8c-04c0-4acd-bddd-92ccb0488534", 00:22:04.284 "strip_size_kb": 0, 00:22:04.284 "state": "online", 00:22:04.284 "raid_level": "raid1", 00:22:04.284 "superblock": true, 00:22:04.284 "num_base_bdevs": 2, 00:22:04.284 "num_base_bdevs_discovered": 1, 00:22:04.284 "num_base_bdevs_operational": 1, 00:22:04.284 "base_bdevs_list": [ 00:22:04.284 { 00:22:04.284 "name": null, 00:22:04.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:04.284 "is_configured": false, 00:22:04.284 "data_offset": 0, 00:22:04.284 "data_size": 7936 00:22:04.284 }, 00:22:04.284 { 00:22:04.284 "name": "BaseBdev2", 00:22:04.284 "uuid": "0822ada5-73db-5114-81bf-cebbb1a13ff8", 00:22:04.284 "is_configured": true, 00:22:04.284 "data_offset": 256, 00:22:04.284 "data_size": 7936 00:22:04.284 } 00:22:04.284 ] 00:22:04.284 }' 00:22:04.284 06:31:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:04.284 06:31:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:04.284 06:31:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:04.285 06:31:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:04.285 06:31:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 89605 00:22:04.285 06:31:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89605 ']' 00:22:04.285 06:31:48 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89605 00:22:04.285 06:31:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:22:04.285 06:31:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:04.285 06:31:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89605 00:22:04.285 06:31:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:04.285 06:31:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:04.285 06:31:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89605' 00:22:04.285 killing process with pid 89605 00:22:04.285 Received shutdown signal, test time was about 60.000000 seconds 00:22:04.285 00:22:04.285 Latency(us) 00:22:04.285 [2024-11-26T06:31:48.422Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:04.285 [2024-11-26T06:31:48.422Z] =================================================================================================================== 00:22:04.285 [2024-11-26T06:31:48.422Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:04.285 06:31:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 89605 00:22:04.285 [2024-11-26 06:31:48.403405] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:04.285 06:31:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 89605 00:22:04.285 [2024-11-26 06:31:48.403559] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:04.285 [2024-11-26 06:31:48.403616] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:22:04.285 [2024-11-26 06:31:48.403628] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:22:04.870 [2024-11-26 06:31:48.729090] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:05.826 06:31:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:22:05.826 00:22:05.826 real 0m17.532s 00:22:05.826 user 0m22.752s 00:22:05.826 sys 0m1.786s 00:22:05.826 06:31:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:05.826 06:31:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:05.826 ************************************ 00:22:05.826 END TEST raid_rebuild_test_sb_md_interleaved 00:22:05.826 ************************************ 00:22:05.826 06:31:49 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:22:05.826 06:31:49 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:22:05.826 06:31:49 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 89605 ']' 00:22:05.826 06:31:49 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 89605 00:22:06.084 06:31:49 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:22:06.084 00:22:06.084 real 12m32.601s 00:22:06.084 user 16m43.833s 00:22:06.084 sys 2m7.338s 00:22:06.084 ************************************ 00:22:06.084 END TEST bdev_raid 00:22:06.084 ************************************ 00:22:06.084 06:31:49 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:06.084 06:31:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:06.084 06:31:50 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:22:06.084 06:31:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:06.084 06:31:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:06.084 06:31:50 -- common/autotest_common.sh@10 -- # set +x 00:22:06.084 
************************************ 00:22:06.084 START TEST spdkcli_raid 00:22:06.084 ************************************ 00:22:06.084 06:31:50 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:22:06.084 * Looking for test storage... 00:22:06.084 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:22:06.085 06:31:50 spdkcli_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:06.085 06:31:50 spdkcli_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:22:06.085 06:31:50 spdkcli_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:06.345 06:31:50 spdkcli_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:06.345 06:31:50 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:06.345 06:31:50 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:06.345 06:31:50 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:06.345 06:31:50 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:22:06.345 06:31:50 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:22:06.345 06:31:50 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:22:06.345 06:31:50 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:22:06.345 06:31:50 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:22:06.345 06:31:50 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:22:06.345 06:31:50 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:22:06.345 06:31:50 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:06.345 06:31:50 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:22:06.345 06:31:50 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:22:06.345 06:31:50 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:06.345 06:31:50 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:06.345 06:31:50 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:22:06.345 06:31:50 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:22:06.345 06:31:50 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:06.345 06:31:50 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:22:06.345 06:31:50 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:22:06.345 06:31:50 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:22:06.345 06:31:50 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:22:06.345 06:31:50 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:06.345 06:31:50 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:22:06.345 06:31:50 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:22:06.345 06:31:50 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:06.345 06:31:50 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:06.345 06:31:50 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:22:06.345 06:31:50 spdkcli_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:06.345 06:31:50 spdkcli_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:06.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:06.345 --rc genhtml_branch_coverage=1 00:22:06.345 --rc genhtml_function_coverage=1 00:22:06.345 --rc genhtml_legend=1 00:22:06.345 --rc geninfo_all_blocks=1 00:22:06.345 --rc geninfo_unexecuted_blocks=1 00:22:06.345 00:22:06.345 ' 00:22:06.345 06:31:50 spdkcli_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:06.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:06.345 --rc genhtml_branch_coverage=1 00:22:06.345 --rc genhtml_function_coverage=1 00:22:06.345 --rc genhtml_legend=1 00:22:06.345 --rc geninfo_all_blocks=1 00:22:06.345 --rc geninfo_unexecuted_blocks=1 00:22:06.345 00:22:06.345 ' 00:22:06.345 
06:31:50 spdkcli_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:06.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:06.345 --rc genhtml_branch_coverage=1 00:22:06.345 --rc genhtml_function_coverage=1 00:22:06.345 --rc genhtml_legend=1 00:22:06.345 --rc geninfo_all_blocks=1 00:22:06.345 --rc geninfo_unexecuted_blocks=1 00:22:06.345 00:22:06.345 ' 00:22:06.345 06:31:50 spdkcli_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:06.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:06.345 --rc genhtml_branch_coverage=1 00:22:06.345 --rc genhtml_function_coverage=1 00:22:06.345 --rc genhtml_legend=1 00:22:06.345 --rc geninfo_all_blocks=1 00:22:06.345 --rc geninfo_unexecuted_blocks=1 00:22:06.345 00:22:06.345 ' 00:22:06.345 06:31:50 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:22:06.345 06:31:50 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:22:06.345 06:31:50 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:22:06.345 06:31:50 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:22:06.345 06:31:50 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:22:06.345 06:31:50 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:22:06.345 06:31:50 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:22:06.345 06:31:50 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:22:06.345 06:31:50 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:22:06.345 06:31:50 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:22:06.345 06:31:50 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:22:06.345 06:31:50 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:22:06.345 06:31:50 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:22:06.345 06:31:50 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:22:06.345 06:31:50 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:22:06.345 06:31:50 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:22:06.346 06:31:50 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:22:06.346 06:31:50 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:22:06.346 06:31:50 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:22:06.346 06:31:50 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:22:06.346 06:31:50 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:22:06.346 06:31:50 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:22:06.346 06:31:50 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:22:06.346 06:31:50 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:22:06.346 06:31:50 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:22:06.346 06:31:50 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:22:06.346 06:31:50 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:22:06.346 06:31:50 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:22:06.346 06:31:50 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:22:06.346 06:31:50 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:22:06.346 06:31:50 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:22:06.346 06:31:50 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:22:06.346 06:31:50 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:22:06.346 06:31:50 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:06.346 06:31:50 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:06.346 06:31:50 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:22:06.346 06:31:50 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=90282 00:22:06.346 06:31:50 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:22:06.346 06:31:50 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 90282 00:22:06.346 06:31:50 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 90282 ']' 00:22:06.346 06:31:50 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:06.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:06.346 06:31:50 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:06.346 06:31:50 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:06.346 06:31:50 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:06.346 06:31:50 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:06.346 [2024-11-26 06:31:50.400721] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:22:06.346 [2024-11-26 06:31:50.400914] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90282 ] 00:22:06.606 [2024-11-26 06:31:50.585426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:06.606 [2024-11-26 06:31:50.721899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:06.606 [2024-11-26 06:31:50.721941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:07.987 06:31:51 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:07.987 06:31:51 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:22:07.987 06:31:51 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:22:07.987 06:31:51 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:07.987 06:31:51 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:07.987 06:31:51 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:22:07.987 06:31:51 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:07.987 06:31:51 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:07.987 06:31:51 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:22:07.987 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:22:07.987 ' 00:22:09.368 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:22:09.368 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:22:09.368 06:31:53 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:22:09.368 06:31:53 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:09.368 06:31:53 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:22:09.368 06:31:53 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:22:09.368 06:31:53 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:09.368 06:31:53 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:09.368 06:31:53 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:22:09.368 ' 00:22:10.749 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:22:10.749 06:31:54 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:22:10.749 06:31:54 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:10.749 06:31:54 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:10.749 06:31:54 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:22:10.749 06:31:54 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:10.749 06:31:54 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:10.749 06:31:54 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:22:10.749 06:31:54 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:22:11.320 06:31:55 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:22:11.320 06:31:55 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:22:11.320 06:31:55 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:22:11.320 06:31:55 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:11.320 06:31:55 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:11.320 06:31:55 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:22:11.320 06:31:55 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:11.320 06:31:55 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:11.320 06:31:55 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:22:11.320 ' 00:22:12.258 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:22:12.518 06:31:56 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:22:12.518 06:31:56 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:12.518 06:31:56 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:12.518 06:31:56 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:22:12.518 06:31:56 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:12.518 06:31:56 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:12.518 06:31:56 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:22:12.518 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:22:12.518 ' 00:22:13.899 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:22:13.899 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:22:13.899 06:31:57 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:22:13.899 06:31:57 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:13.899 06:31:57 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:13.899 06:31:57 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 90282 00:22:13.899 06:31:57 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 90282 ']' 00:22:13.899 06:31:57 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 90282 00:22:13.899 06:31:57 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:22:13.899 06:31:58 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:13.899 06:31:58 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90282 00:22:14.159 killing process with pid 90282 00:22:14.159 06:31:58 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:14.159 06:31:58 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:14.159 06:31:58 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90282' 00:22:14.159 06:31:58 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 90282 00:22:14.159 06:31:58 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 90282 00:22:16.714 06:32:00 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:22:16.714 06:32:00 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 90282 ']' 00:22:16.714 06:32:00 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 90282 00:22:16.714 06:32:00 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 90282 ']' 00:22:16.714 06:32:00 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 90282 00:22:16.714 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (90282) - No such process 00:22:16.714 Process with pid 90282 is not found 00:22:16.714 06:32:00 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 90282 is not found' 00:22:16.714 06:32:00 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:22:16.714 06:32:00 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:22:16.714 06:32:00 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:22:16.714 06:32:00 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:22:16.714 ************************************ 00:22:16.714 END TEST spdkcli_raid 
00:22:16.714 ************************************ 00:22:16.714 00:22:16.714 real 0m10.582s 00:22:16.714 user 0m21.564s 00:22:16.714 sys 0m1.335s 00:22:16.714 06:32:00 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:16.714 06:32:00 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:16.714 06:32:00 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:22:16.714 06:32:00 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:16.714 06:32:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:16.714 06:32:00 -- common/autotest_common.sh@10 -- # set +x 00:22:16.714 ************************************ 00:22:16.714 START TEST blockdev_raid5f 00:22:16.714 ************************************ 00:22:16.714 06:32:00 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:22:16.714 * Looking for test storage... 00:22:16.714 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:22:16.714 06:32:00 blockdev_raid5f -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:16.714 06:32:00 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lcov --version 00:22:16.714 06:32:00 blockdev_raid5f -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:16.988 06:32:00 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:16.989 06:32:00 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:16.989 06:32:00 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:16.989 06:32:00 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:16.989 06:32:00 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:22:16.989 06:32:00 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:22:16.989 06:32:00 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:22:16.989 06:32:00 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:22:16.989 06:32:00 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:22:16.989 06:32:00 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:22:16.989 06:32:00 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:22:16.989 06:32:00 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:16.989 06:32:00 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:22:16.989 06:32:00 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:22:16.989 06:32:00 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:16.989 06:32:00 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:16.989 06:32:00 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:22:16.989 06:32:00 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:22:16.989 06:32:00 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:16.989 06:32:00 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:22:16.989 06:32:00 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:22:16.989 06:32:00 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:22:16.989 06:32:00 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:22:16.989 06:32:00 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:16.989 06:32:00 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:22:16.989 06:32:00 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:22:16.989 06:32:00 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:16.989 06:32:00 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:16.989 06:32:00 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:22:16.989 06:32:00 blockdev_raid5f -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:16.989 06:32:00 blockdev_raid5f -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:16.989 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:16.989 --rc genhtml_branch_coverage=1 00:22:16.989 --rc genhtml_function_coverage=1 00:22:16.989 --rc genhtml_legend=1 00:22:16.989 --rc geninfo_all_blocks=1 00:22:16.989 --rc geninfo_unexecuted_blocks=1 00:22:16.989 00:22:16.989 ' 00:22:16.989 06:32:00 blockdev_raid5f -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:16.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:16.989 --rc genhtml_branch_coverage=1 00:22:16.989 --rc genhtml_function_coverage=1 00:22:16.989 --rc genhtml_legend=1 00:22:16.989 --rc geninfo_all_blocks=1 00:22:16.989 --rc geninfo_unexecuted_blocks=1 00:22:16.989 00:22:16.989 ' 00:22:16.989 06:32:00 blockdev_raid5f -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:16.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:16.989 --rc genhtml_branch_coverage=1 00:22:16.989 --rc genhtml_function_coverage=1 00:22:16.989 --rc genhtml_legend=1 00:22:16.989 --rc geninfo_all_blocks=1 00:22:16.989 --rc geninfo_unexecuted_blocks=1 00:22:16.989 00:22:16.989 ' 00:22:16.989 06:32:00 blockdev_raid5f -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:16.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:16.989 --rc genhtml_branch_coverage=1 00:22:16.989 --rc genhtml_function_coverage=1 00:22:16.989 --rc genhtml_legend=1 00:22:16.989 --rc geninfo_all_blocks=1 00:22:16.989 --rc geninfo_unexecuted_blocks=1 00:22:16.989 00:22:16.989 ' 00:22:16.989 06:32:00 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:22:16.989 06:32:00 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:22:16.989 06:32:00 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:22:16.989 06:32:00 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:22:16.989 06:32:00 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:22:16.989 06:32:00 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:22:16.989 06:32:00 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:22:16.989 06:32:00 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:22:16.989 06:32:00 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:22:16.989 06:32:00 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:22:16.989 06:32:00 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:22:16.989 06:32:00 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:22:16.989 06:32:00 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:22:16.989 06:32:00 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:22:16.989 06:32:00 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:22:16.989 06:32:00 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:22:16.989 06:32:00 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:22:16.989 06:32:00 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:22:16.989 06:32:00 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:22:16.989 06:32:00 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:22:16.989 06:32:00 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:22:16.989 06:32:00 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:22:16.989 06:32:00 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:22:16.989 06:32:00 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:22:16.989 06:32:00 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=90570 00:22:16.989 06:32:00 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:22:16.989 06:32:00 blockdev_raid5f -- bdev/blockdev.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:22:16.989 06:32:00 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 90570 00:22:16.989 06:32:00 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 90570 ']' 00:22:16.989 06:32:00 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:16.989 06:32:00 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:16.989 06:32:00 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:16.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:16.989 06:32:00 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:16.989 06:32:00 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:16.989 [2024-11-26 06:32:01.058862] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:22:16.989 [2024-11-26 06:32:01.059109] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90570 ] 00:22:17.249 [2024-11-26 06:32:01.239772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:17.249 [2024-11-26 06:32:01.375539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:18.628 06:32:02 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:18.628 06:32:02 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:22:18.628 06:32:02 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:22:18.628 06:32:02 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:22:18.628 06:32:02 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:22:18.628 06:32:02 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.628 06:32:02 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:18.628 Malloc0 00:22:18.628 Malloc1 00:22:18.628 Malloc2 00:22:18.628 06:32:02 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.628 06:32:02 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:22:18.628 06:32:02 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.628 06:32:02 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:18.628 06:32:02 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.628 06:32:02 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:22:18.628 06:32:02 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:22:18.628 06:32:02 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.628 06:32:02 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:18.628 06:32:02 
blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.628 06:32:02 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:22:18.628 06:32:02 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.628 06:32:02 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:18.628 06:32:02 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.628 06:32:02 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:22:18.628 06:32:02 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.628 06:32:02 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:18.628 06:32:02 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.628 06:32:02 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:22:18.628 06:32:02 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:22:18.628 06:32:02 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.628 06:32:02 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:18.628 06:32:02 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:22:18.628 06:32:02 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.628 06:32:02 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:22:18.628 06:32:02 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "15ad0eb2-157e-4d6c-ac92-2133721b0cd3"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "15ad0eb2-157e-4d6c-ac92-2133721b0cd3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' 
"flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "15ad0eb2-157e-4d6c-ac92-2133721b0cd3",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "4f8a5027-a381-4cdf-82bc-1c1324242a34",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "3092149f-5970-4355-9b7a-61f7fee421dd",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "01b960f5-e795-4f07-b03b-68c1897ad211",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:22:18.628 06:32:02 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:22:18.628 06:32:02 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:22:18.628 06:32:02 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:22:18.628 06:32:02 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:22:18.628 06:32:02 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 90570 00:22:18.628 06:32:02 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 90570 ']' 00:22:18.628 06:32:02 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 90570 00:22:18.628 06:32:02 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:22:18.628 06:32:02 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:18.628 
06:32:02 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90570 00:22:18.628 killing process with pid 90570 00:22:18.628 06:32:02 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:18.628 06:32:02 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:18.628 06:32:02 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90570' 00:22:18.628 06:32:02 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 90570 00:22:18.628 06:32:02 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 90570 00:22:21.925 06:32:05 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:21.925 06:32:05 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:22:21.925 06:32:05 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:22:21.925 06:32:05 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:21.925 06:32:05 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:21.925 ************************************ 00:22:21.925 START TEST bdev_hello_world 00:22:21.925 ************************************ 00:22:21.925 06:32:05 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:22:21.925 [2024-11-26 06:32:05.738503] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:22:21.925 [2024-11-26 06:32:05.738629] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90638 ] 00:22:21.925 [2024-11-26 06:32:05.918805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:22.184 [2024-11-26 06:32:06.056416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:22.754 [2024-11-26 06:32:06.647305] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:22:22.754 [2024-11-26 06:32:06.647359] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:22:22.754 [2024-11-26 06:32:06.647377] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:22:22.754 [2024-11-26 06:32:06.647896] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:22:22.754 [2024-11-26 06:32:06.648051] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:22:22.754 [2024-11-26 06:32:06.648079] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:22:22.754 [2024-11-26 06:32:06.648128] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:22:22.754 00:22:22.754 [2024-11-26 06:32:06.648146] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:22:24.136 00:22:24.136 real 0m2.494s 00:22:24.136 user 0m2.012s 00:22:24.136 sys 0m0.360s 00:22:24.136 ************************************ 00:22:24.136 END TEST bdev_hello_world 00:22:24.136 ************************************ 00:22:24.136 06:32:08 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:24.136 06:32:08 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:22:24.136 06:32:08 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:22:24.136 06:32:08 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:24.136 06:32:08 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:24.136 06:32:08 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:24.136 ************************************ 00:22:24.136 START TEST bdev_bounds 00:22:24.136 ************************************ 00:22:24.136 06:32:08 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:22:24.136 06:32:08 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=90686 00:22:24.136 06:32:08 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:22:24.136 06:32:08 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:22:24.136 06:32:08 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 90686' 00:22:24.136 Process bdevio pid: 90686 00:22:24.136 06:32:08 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 90686 00:22:24.136 06:32:08 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 90686 ']' 00:22:24.136 06:32:08 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:24.136 06:32:08 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:24.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:24.136 06:32:08 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:24.136 06:32:08 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:24.136 06:32:08 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:22:24.396 [2024-11-26 06:32:08.308971] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:22:24.396 [2024-11-26 06:32:08.309118] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90686 ] 00:22:24.396 [2024-11-26 06:32:08.478014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:24.656 [2024-11-26 06:32:08.595345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:24.656 [2024-11-26 06:32:08.595495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:24.656 [2024-11-26 06:32:08.595544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:25.225 06:32:09 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:25.225 06:32:09 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:22:25.225 06:32:09 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:22:25.225 I/O targets: 00:22:25.225 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:22:25.225 00:22:25.225 
00:22:25.225 CUnit - A unit testing framework for C - Version 2.1-3 00:22:25.225 http://cunit.sourceforge.net/ 00:22:25.225 00:22:25.225 00:22:25.225 Suite: bdevio tests on: raid5f 00:22:25.225 Test: blockdev write read block ...passed 00:22:25.225 Test: blockdev write zeroes read block ...passed 00:22:25.225 Test: blockdev write zeroes read no split ...passed 00:22:25.225 Test: blockdev write zeroes read split ...passed 00:22:25.484 Test: blockdev write zeroes read split partial ...passed 00:22:25.484 Test: blockdev reset ...passed 00:22:25.484 Test: blockdev write read 8 blocks ...passed 00:22:25.484 Test: blockdev write read size > 128k ...passed 00:22:25.484 Test: blockdev write read invalid size ...passed 00:22:25.484 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:25.484 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:25.484 Test: blockdev write read max offset ...passed 00:22:25.484 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:25.484 Test: blockdev writev readv 8 blocks ...passed 00:22:25.484 Test: blockdev writev readv 30 x 1block ...passed 00:22:25.484 Test: blockdev writev readv block ...passed 00:22:25.484 Test: blockdev writev readv size > 128k ...passed 00:22:25.484 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:25.484 Test: blockdev comparev and writev ...passed 00:22:25.484 Test: blockdev nvme passthru rw ...passed 00:22:25.484 Test: blockdev nvme passthru vendor specific ...passed 00:22:25.484 Test: blockdev nvme admin passthru ...passed 00:22:25.484 Test: blockdev copy ...passed 00:22:25.484 00:22:25.484 Run Summary: Type Total Ran Passed Failed Inactive 00:22:25.484 suites 1 1 n/a 0 0 00:22:25.484 tests 23 23 23 0 0 00:22:25.485 asserts 130 130 130 0 n/a 00:22:25.485 00:22:25.485 Elapsed time = 0.601 seconds 00:22:25.485 0 00:22:25.485 06:32:09 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 90686 00:22:25.485 
06:32:09 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 90686 ']' 00:22:25.485 06:32:09 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 90686 00:22:25.485 06:32:09 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:22:25.485 06:32:09 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:25.485 06:32:09 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90686 00:22:25.485 06:32:09 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:25.485 06:32:09 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:25.485 06:32:09 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90686' 00:22:25.485 killing process with pid 90686 00:22:25.485 06:32:09 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 90686 00:22:25.485 06:32:09 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 90686 00:22:27.404 06:32:11 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:22:27.404 00:22:27.404 real 0m2.848s 00:22:27.404 user 0m7.026s 00:22:27.404 sys 0m0.414s 00:22:27.404 06:32:11 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:27.404 06:32:11 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:22:27.404 ************************************ 00:22:27.404 END TEST bdev_bounds 00:22:27.404 ************************************ 00:22:27.404 06:32:11 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:22:27.404 06:32:11 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:27.404 06:32:11 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:27.404 
06:32:11 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:27.404 ************************************ 00:22:27.404 START TEST bdev_nbd 00:22:27.404 ************************************ 00:22:27.404 06:32:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:22:27.404 06:32:11 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:22:27.404 06:32:11 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:22:27.404 06:32:11 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:27.404 06:32:11 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:22:27.404 06:32:11 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:22:27.404 06:32:11 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:22:27.404 06:32:11 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:22:27.404 06:32:11 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:22:27.404 06:32:11 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:22:27.404 06:32:11 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:22:27.404 06:32:11 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:22:27.404 06:32:11 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:22:27.404 06:32:11 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:22:27.404 06:32:11 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:22:27.404 06:32:11 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 
-- # local bdev_list 00:22:27.404 06:32:11 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90740 00:22:27.404 06:32:11 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:22:27.404 06:32:11 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:22:27.404 06:32:11 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90740 /var/tmp/spdk-nbd.sock 00:22:27.404 06:32:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 90740 ']' 00:22:27.404 06:32:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:22:27.404 06:32:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:27.404 06:32:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:22:27.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:22:27.404 06:32:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:27.404 06:32:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:22:27.404 [2024-11-26 06:32:11.243409] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:22:27.404 [2024-11-26 06:32:11.244080] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:27.404 [2024-11-26 06:32:11.409474] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:27.665 [2024-11-26 06:32:11.544262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:28.233 06:32:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:28.233 06:32:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:22:28.233 06:32:12 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:22:28.233 06:32:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:28.233 06:32:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:22:28.233 06:32:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:22:28.233 06:32:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:22:28.233 06:32:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:28.233 06:32:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:22:28.233 06:32:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:22:28.233 06:32:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:22:28.233 06:32:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:22:28.233 06:32:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:22:28.233 06:32:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:22:28.233 06:32:12 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:22:28.493 06:32:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:22:28.493 06:32:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:22:28.493 06:32:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:22:28.493 06:32:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:22:28.493 06:32:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:22:28.493 06:32:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:28.493 06:32:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:28.493 06:32:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:22:28.493 06:32:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:22:28.493 06:32:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:28.493 06:32:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:28.493 06:32:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:28.493 1+0 records in 00:22:28.493 1+0 records out 00:22:28.493 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000242146 s, 16.9 MB/s 00:22:28.493 06:32:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:28.493 06:32:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:22:28.493 06:32:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:28.493 06:32:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:22:28.493 06:32:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:22:28.493 06:32:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:22:28.493 06:32:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:22:28.493 06:32:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:22:28.753 06:32:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:22:28.753 { 00:22:28.753 "nbd_device": "/dev/nbd0", 00:22:28.753 "bdev_name": "raid5f" 00:22:28.753 } 00:22:28.753 ]' 00:22:28.753 06:32:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:22:28.753 06:32:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:22:28.753 { 00:22:28.753 "nbd_device": "/dev/nbd0", 00:22:28.753 "bdev_name": "raid5f" 00:22:28.753 } 00:22:28.753 ]' 00:22:28.753 06:32:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:22:28.753 06:32:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:22:28.753 06:32:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:28.753 06:32:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:28.753 06:32:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:28.753 06:32:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:22:28.753 06:32:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:28.753 06:32:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:22:28.753 06:32:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:22:28.753 06:32:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:28.753 06:32:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:28.753 06:32:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:28.753 06:32:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:28.753 06:32:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:28.753 06:32:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:22:28.753 06:32:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:22:28.753 06:32:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:22:28.753 06:32:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:28.753 06:32:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:22:29.013 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:22:29.013 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:22:29.013 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:22:29.013 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:22:29.013 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:22:29.013 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:22:29.013 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:22:29.013 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:22:29.013 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:22:29.013 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:22:29.013 06:32:13 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:22:29.013 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:22:29.013 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:22:29.013 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:29.013 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:22:29.013 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:22:29.013 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:22:29.013 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:22:29.014 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:22:29.273 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:29.273 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:22:29.273 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:29.273 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:29.273 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:29.273 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:22:29.273 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:29.273 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:29.274 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:22:29.274 /dev/nbd0 00:22:29.274 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:29.274 06:32:13 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:29.274 06:32:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:22:29.274 06:32:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:22:29.274 06:32:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:29.274 06:32:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:29.274 06:32:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:22:29.274 06:32:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:22:29.274 06:32:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:29.274 06:32:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:29.274 06:32:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:29.274 1+0 records in 00:22:29.274 1+0 records out 00:22:29.274 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000564878 s, 7.3 MB/s 00:22:29.274 06:32:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:29.274 06:32:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:22:29.274 06:32:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:29.274 06:32:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:29.274 06:32:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:22:29.274 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:29.274 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:29.274 06:32:13 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:22:29.274 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:29.274 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:22:29.535 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:22:29.535 { 00:22:29.535 "nbd_device": "/dev/nbd0", 00:22:29.535 "bdev_name": "raid5f" 00:22:29.535 } 00:22:29.535 ]' 00:22:29.535 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:22:29.535 { 00:22:29.535 "nbd_device": "/dev/nbd0", 00:22:29.535 "bdev_name": "raid5f" 00:22:29.535 } 00:22:29.535 ]' 00:22:29.535 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:22:29.535 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:22:29.535 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:22:29.535 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:22:29.535 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:22:29.535 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:22:29.535 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:22:29.535 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:22:29.535 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:22:29.535 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:22:29.535 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:22:29.535 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:22:29.535 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:22:29.535 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:22:29.796 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:22:29.796 256+0 records in 00:22:29.796 256+0 records out 00:22:29.796 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0144732 s, 72.4 MB/s 00:22:29.796 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:22:29.796 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:22:29.796 256+0 records in 00:22:29.796 256+0 records out 00:22:29.796 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0311487 s, 33.7 MB/s 00:22:29.796 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:22:29.796 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:22:29.796 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:22:29.796 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:22:29.796 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:22:29.796 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:22:29.796 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:22:29.796 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:22:29.796 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:22:29.796 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:22:29.796 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:22:29.796 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:29.796 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:29.796 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:29.796 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:22:29.796 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:29.796 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:22:30.056 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:30.056 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:30.056 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:30.056 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:30.056 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:30.056 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:30.056 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:22:30.056 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:22:30.056 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:22:30.056 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:30.056 06:32:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:22:30.056 06:32:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:22:30.056 06:32:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:22:30.056 06:32:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:22:30.316 06:32:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:22:30.316 06:32:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:22:30.316 06:32:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:22:30.316 06:32:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:22:30.316 06:32:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:22:30.316 06:32:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:22:30.316 06:32:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:22:30.316 06:32:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:22:30.316 06:32:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:22:30.316 06:32:14 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:22:30.316 06:32:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:30.316 06:32:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:22:30.316 06:32:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:22:30.316 malloc_lvol_verify 00:22:30.576 06:32:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:22:30.576 c8d50228-827c-4fe6-8814-4c9ee9691f98 00:22:30.576 06:32:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:22:30.837 4fbecd22-9bba-417b-9e85-c2fb2eef7321 00:22:30.837 06:32:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:22:31.097 /dev/nbd0 00:22:31.097 06:32:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:22:31.097 06:32:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:22:31.097 06:32:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:22:31.097 06:32:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:22:31.097 06:32:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:22:31.097 mke2fs 1.47.0 (5-Feb-2023) 00:22:31.097 Discarding device blocks: 0/4096 done 00:22:31.097 Creating filesystem with 4096 1k blocks and 1024 inodes 00:22:31.097 00:22:31.097 Allocating group tables: 0/1 done 00:22:31.097 Writing inode tables: 0/1 done 00:22:31.097 Creating journal (1024 blocks): done 00:22:31.097 Writing superblocks and filesystem accounting information: 0/1 done 00:22:31.097 00:22:31.097 06:32:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:22:31.097 06:32:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:31.097 06:32:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:31.097 06:32:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:31.097 06:32:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:22:31.097 06:32:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:31.097 06:32:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:22:31.357 06:32:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:31.357 06:32:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:31.357 06:32:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:31.357 06:32:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:31.357 06:32:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:31.357 06:32:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:31.357 06:32:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:22:31.357 06:32:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:22:31.357 06:32:15 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90740 00:22:31.357 06:32:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 90740 ']' 00:22:31.357 06:32:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 90740 00:22:31.357 06:32:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:22:31.357 06:32:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:31.357 06:32:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90740 00:22:31.357 06:32:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:31.357 06:32:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:31.357 06:32:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90740' 00:22:31.357 killing process with pid 90740 00:22:31.357 06:32:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 90740 00:22:31.357 06:32:15 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@978 -- # wait 90740 00:22:33.267 ************************************ 00:22:33.267 END TEST bdev_nbd 00:22:33.267 ************************************ 00:22:33.267 06:32:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:22:33.267 00:22:33.267 real 0m5.888s 00:22:33.267 user 0m7.746s 00:22:33.267 sys 0m1.446s 00:22:33.267 06:32:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:33.267 06:32:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:22:33.267 06:32:17 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:22:33.267 06:32:17 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:22:33.267 06:32:17 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:22:33.267 06:32:17 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:22:33.267 06:32:17 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:33.267 06:32:17 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:33.267 06:32:17 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:33.267 ************************************ 00:22:33.267 START TEST bdev_fio 00:22:33.267 ************************************ 00:22:33.267 06:32:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:22:33.267 06:32:17 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:22:33.267 06:32:17 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:22:33.267 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:22:33.267 06:32:17 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:22:33.267 06:32:17 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:22:33.267 06:32:17 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:22:33.267 06:32:17 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:22:33.267 06:32:17 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:22:33.267 06:32:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:33.267 06:32:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:22:33.267 06:32:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:22:33.267 06:32:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:22:33.267 06:32:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:22:33.267 06:32:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:22:33.267 06:32:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:22:33.267 06:32:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:22:33.267 06:32:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:33.267 06:32:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:22:33.267 06:32:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:22:33.267 06:32:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:22:33.267 06:32:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:22:33.267 06:32:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:22:33.267 06:32:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
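The `[[ fio-3.35 == *\f\i\o\-\3* ]]` check above gates a config option on the detected fio major version. A sketch of that gate in isolation (the literal `fio-3.35` stands in for real `/usr/src/fio/fio --version` output):

```shell
#!/usr/bin/env bash
# Version gate from fio_config_gen: only fio 3.x understands the
# serialize_overlap option, so it is emitted conditionally.
fio_version="fio-3.35"          # assumption: stand-in for `fio --version`
if [[ $fio_version == *fio-3* ]]; then
    echo "serialize_overlap=1"  # appended to the generated bdev.fio
fi
```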
00:22:33.267 06:32:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:22:33.267 06:32:17 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:22:33.267 06:32:17 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:22:33.267 06:32:17 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:22:33.267 06:32:17 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:22:33.267 06:32:17 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:22:33.267 06:32:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:22:33.267 06:32:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:33.267 06:32:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:22:33.267 ************************************ 00:22:33.267 START TEST bdev_fio_rw_verify 00:22:33.267 ************************************ 00:22:33.267 06:32:17 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:22:33.267 06:32:17 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:22:33.267 06:32:17 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:33.267 06:32:17 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:33.267 06:32:17 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:33.267 06:32:17 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:33.267 06:32:17 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:22:33.267 06:32:17 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:33.268 06:32:17 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:33.268 06:32:17 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:33.268 06:32:17 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:22:33.268 06:32:17 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:33.268 06:32:17 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:33.268 06:32:17 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:33.268 06:32:17 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1351 -- # break 00:22:33.268 06:32:17 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:33.268 06:32:17 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:22:33.527 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:33.527 fio-3.35 00:22:33.527 Starting 1 thread 00:22:45.766 00:22:45.766 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90950: Tue Nov 26 06:32:28 2024 00:22:45.766 read: IOPS=12.0k, BW=47.0MiB/s (49.3MB/s)(470MiB/10001msec) 00:22:45.766 slat (nsec): min=16863, max=68022, avg=19886.39, stdev=2023.65 00:22:45.766 clat (usec): min=10, max=359, avg=132.23, stdev=47.46 00:22:45.766 lat (usec): min=29, max=379, avg=152.12, stdev=47.77 00:22:45.766 clat percentiles (usec): 00:22:45.766 | 50.000th=[ 135], 99.000th=[ 223], 99.900th=[ 262], 99.990th=[ 297], 00:22:45.766 | 99.999th=[ 326] 00:22:45.766 write: IOPS=12.6k, BW=49.1MiB/s (51.5MB/s)(485MiB/9870msec); 0 zone resets 00:22:45.766 slat (usec): min=7, max=296, avg=16.77, stdev= 3.64 00:22:45.766 clat (usec): min=58, max=1346, avg=306.25, stdev=43.12 00:22:45.766 lat (usec): min=74, max=1610, avg=323.03, stdev=44.23 00:22:45.766 clat percentiles (usec): 00:22:45.766 | 50.000th=[ 310], 99.000th=[ 408], 99.900th=[ 570], 99.990th=[ 1123], 00:22:45.766 | 99.999th=[ 1270] 00:22:45.766 bw ( KiB/s): min=43984, max=54000, per=99.00%, avg=49798.32, stdev=2324.12, samples=19 00:22:45.766 iops : min=10996, max=13500, avg=12449.58, stdev=581.03, samples=19 00:22:45.766 lat (usec) : 20=0.01%, 50=0.01%, 
100=15.50%, 250=38.72%, 500=45.70% 00:22:45.766 lat (usec) : 750=0.05%, 1000=0.02% 00:22:45.766 lat (msec) : 2=0.01% 00:22:45.766 cpu : usr=99.03%, sys=0.41%, ctx=26, majf=0, minf=9860 00:22:45.766 IO depths : 1=7.7%, 2=19.9%, 4=55.1%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:45.766 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:45.766 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:45.766 issued rwts: total=120331,124112,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:45.766 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:45.766 00:22:45.766 Run status group 0 (all jobs): 00:22:45.766 READ: bw=47.0MiB/s (49.3MB/s), 47.0MiB/s-47.0MiB/s (49.3MB/s-49.3MB/s), io=470MiB (493MB), run=10001-10001msec 00:22:45.766 WRITE: bw=49.1MiB/s (51.5MB/s), 49.1MiB/s-49.1MiB/s (51.5MB/s-51.5MB/s), io=485MiB (508MB), run=9870-9870msec 00:22:46.337 ----------------------------------------------------- 00:22:46.337 Suppressions used: 00:22:46.337 count bytes template 00:22:46.337 1 7 /usr/src/fio/parse.c 00:22:46.337 61 5856 /usr/src/fio/iolog.c 00:22:46.337 1 8 libtcmalloc_minimal.so 00:22:46.337 1 904 libcrypto.so 00:22:46.337 ----------------------------------------------------- 00:22:46.337 00:22:46.337 00:22:46.337 real 0m13.007s 00:22:46.337 user 0m13.030s 00:22:46.337 sys 0m0.835s 00:22:46.337 06:32:30 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:46.337 06:32:30 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:22:46.337 ************************************ 00:22:46.337 END TEST bdev_fio_rw_verify 00:22:46.337 ************************************ 00:22:46.337 06:32:30 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:22:46.337 06:32:30 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:46.337 06:32:30 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:22:46.337 06:32:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:46.337 06:32:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:22:46.337 06:32:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:22:46.337 06:32:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:22:46.337 06:32:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:22:46.337 06:32:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:22:46.337 06:32:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:22:46.337 06:32:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:22:46.337 06:32:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:46.337 06:32:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:22:46.337 06:32:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:22:46.337 06:32:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:22:46.337 06:32:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:22:46.337 06:32:30 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "15ad0eb2-157e-4d6c-ac92-2133721b0cd3"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "15ad0eb2-157e-4d6c-ac92-2133721b0cd3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "15ad0eb2-157e-4d6c-ac92-2133721b0cd3",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "4f8a5027-a381-4cdf-82bc-1c1324242a34",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "3092149f-5970-4355-9b7a-61f7fee421dd",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "01b960f5-e795-4f07-b03b-68c1897ad211",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:22:46.337 06:32:30 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:22:46.337 06:32:30 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:22:46.337 06:32:30 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:46.337 /home/vagrant/spdk_repo/spdk 00:22:46.337 06:32:30 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:22:46.337 06:32:30 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:22:46.337 06:32:30 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
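The bandwidth figures in the fio summary earlier in this section follow directly from the issued I/O counts: 120331 reads of 4 KiB over 10.001 s works out to the 47.0 MiB/s shown on the READ line. A quick cross-check with the numbers taken from the run log:

```shell
#!/usr/bin/env bash
# Reproduce the READ bandwidth from the fio summary:
# issued reads * block size / runtime, converted to MiB/s.
awk 'BEGIN {
    reads = 120331; bs = 4096; secs = 10.001
    printf "%.1f MiB/s\n", reads * bs / secs / (1024 * 1024)
}'
# prints 47.0 MiB/s
```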
00:22:46.337 00:22:46.337 real 0m13.285s 00:22:46.337 user 0m13.134s 00:22:46.337 sys 0m0.960s 00:22:46.337 06:32:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:46.337 06:32:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:22:46.337 ************************************ 00:22:46.337 END TEST bdev_fio 00:22:46.337 ************************************ 00:22:46.337 06:32:30 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:46.337 06:32:30 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:22:46.337 06:32:30 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:22:46.337 06:32:30 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:46.337 06:32:30 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:46.337 ************************************ 00:22:46.337 START TEST bdev_verify 00:22:46.337 ************************************ 00:22:46.337 06:32:30 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:22:46.598 [2024-11-26 06:32:30.525693] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 
00:22:46.598 [2024-11-26 06:32:30.525811] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91108 ] 00:22:46.598 [2024-11-26 06:32:30.705620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:46.865 [2024-11-26 06:32:30.846902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:46.865 [2024-11-26 06:32:30.846937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:47.449 Running I/O for 5 seconds... 00:22:49.766 16381.00 IOPS, 63.99 MiB/s [2024-11-26T06:32:34.472Z] 14514.50 IOPS, 56.70 MiB/s [2024-11-26T06:32:35.851Z] 13179.33 IOPS, 51.48 MiB/s [2024-11-26T06:32:36.819Z] 12485.50 IOPS, 48.77 MiB/s [2024-11-26T06:32:36.819Z] 12090.20 IOPS, 47.23 MiB/s 00:22:52.682 Latency(us) 00:22:52.682 [2024-11-26T06:32:36.819Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:52.682 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:52.682 Verification LBA range: start 0x0 length 0x2000 00:22:52.682 raid5f : 5.02 6829.04 26.68 0.00 0.00 28208.97 472.20 22665.73 00:22:52.682 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:52.682 Verification LBA range: start 0x2000 length 0x2000 00:22:52.682 raid5f : 5.02 5264.80 20.57 0.00 0.00 36677.87 140.41 41439.36 00:22:52.682 [2024-11-26T06:32:36.819Z] =================================================================================================================== 00:22:52.682 [2024-11-26T06:32:36.819Z] Total : 12093.84 47.24 0.00 0.00 31898.93 140.41 41439.36 00:22:54.063 00:22:54.063 real 0m7.497s 00:22:54.063 user 0m13.736s 00:22:54.063 sys 0m0.387s 00:22:54.063 06:32:37 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:54.063 06:32:37 
blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:22:54.063 ************************************ 00:22:54.063 END TEST bdev_verify 00:22:54.063 ************************************ 00:22:54.063 06:32:37 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:22:54.063 06:32:37 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:22:54.063 06:32:37 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:54.063 06:32:37 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:54.063 ************************************ 00:22:54.063 START TEST bdev_verify_big_io 00:22:54.063 ************************************ 00:22:54.063 06:32:38 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:22:54.063 [2024-11-26 06:32:38.096103] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:22:54.063 [2024-11-26 06:32:38.096222] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91208 ] 00:22:54.323 [2024-11-26 06:32:38.275912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:54.323 [2024-11-26 06:32:38.408405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:54.323 [2024-11-26 06:32:38.408488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:54.892 Running I/O for 5 seconds... 
00:22:57.213 633.00 IOPS, 39.56 MiB/s [2024-11-26T06:32:42.291Z] 728.50 IOPS, 45.53 MiB/s [2024-11-26T06:32:43.230Z] 719.00 IOPS, 44.94 MiB/s [2024-11-26T06:32:44.169Z] 745.25 IOPS, 46.58 MiB/s [2024-11-26T06:32:44.429Z] 748.60 IOPS, 46.79 MiB/s 00:23:00.292 Latency(us) 00:23:00.292 [2024-11-26T06:32:44.429Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:00.292 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:23:00.292 Verification LBA range: start 0x0 length 0x200 00:23:00.292 raid5f : 5.23 437.06 27.32 0.00 0.00 7369026.47 234.31 320525.41 00:23:00.292 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:23:00.292 Verification LBA range: start 0x200 length 0x200 00:23:00.292 raid5f : 5.35 331.86 20.74 0.00 0.00 9540497.91 205.69 419430.40 00:23:00.292 [2024-11-26T06:32:44.429Z] =================================================================================================================== 00:23:00.292 [2024-11-26T06:32:44.429Z] Total : 768.92 48.06 0.00 0.00 8318443.82 205.69 419430.40 00:23:02.200 00:23:02.200 real 0m7.829s 00:23:02.200 user 0m14.423s 00:23:02.200 sys 0m0.365s 00:23:02.200 06:32:45 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:02.200 06:32:45 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:23:02.200 ************************************ 00:23:02.200 END TEST bdev_verify_big_io 00:23:02.200 ************************************ 00:23:02.200 06:32:45 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:02.200 06:32:45 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:23:02.200 06:32:45 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:02.200 06:32:45 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:02.200 ************************************ 00:23:02.200 START TEST bdev_write_zeroes 00:23:02.200 ************************************ 00:23:02.200 06:32:45 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:02.200 [2024-11-26 06:32:45.992100] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:23:02.200 [2024-11-26 06:32:45.992238] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91305 ] 00:23:02.200 [2024-11-26 06:32:46.171191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:02.200 [2024-11-26 06:32:46.312558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:03.140 Running I/O for 1 seconds... 
00:23:04.079 29199.00 IOPS, 114.06 MiB/s 00:23:04.079 Latency(us) 00:23:04.079 [2024-11-26T06:32:48.216Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:04.079 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:23:04.079 raid5f : 1.01 29164.41 113.92 0.00 0.00 4375.51 1430.92 6152.94 00:23:04.079 [2024-11-26T06:32:48.216Z] =================================================================================================================== 00:23:04.079 [2024-11-26T06:32:48.216Z] Total : 29164.41 113.92 0.00 0.00 4375.51 1430.92 6152.94 00:23:05.461 00:23:05.461 real 0m3.495s 00:23:05.461 user 0m3.008s 00:23:05.461 sys 0m0.358s 00:23:05.461 06:32:49 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:05.461 06:32:49 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:23:05.461 ************************************ 00:23:05.461 END TEST bdev_write_zeroes 00:23:05.461 ************************************ 00:23:05.461 06:32:49 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:05.461 06:32:49 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:23:05.461 06:32:49 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:05.461 06:32:49 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:05.461 ************************************ 00:23:05.461 START TEST bdev_json_nonenclosed 00:23:05.461 ************************************ 00:23:05.461 06:32:49 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:05.461 [2024-11-26 
06:32:49.554493] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:23:05.461 [2024-11-26 06:32:49.554635] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91364 ] 00:23:05.721 [2024-11-26 06:32:49.731612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:05.980 [2024-11-26 06:32:49.871317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:05.980 [2024-11-26 06:32:49.871430] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:23:05.980 [2024-11-26 06:32:49.871461] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:23:05.980 [2024-11-26 06:32:49.871477] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:06.240 00:23:06.240 real 0m0.668s 00:23:06.240 user 0m0.407s 00:23:06.240 sys 0m0.155s 00:23:06.240 06:32:50 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:06.240 06:32:50 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:23:06.240 ************************************ 00:23:06.240 END TEST bdev_json_nonenclosed 00:23:06.240 ************************************ 00:23:06.240 06:32:50 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:06.240 06:32:50 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:23:06.240 06:32:50 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:06.240 06:32:50 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:06.240 
************************************ 00:23:06.240 START TEST bdev_json_nonarray 00:23:06.240 ************************************ 00:23:06.240 06:32:50 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:06.240 [2024-11-26 06:32:50.292578] Starting SPDK v25.01-pre git sha1 8afd1c921 / DPDK 24.03.0 initialization... 00:23:06.240 [2024-11-26 06:32:50.292708] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91389 ] 00:23:06.500 [2024-11-26 06:32:50.471526] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:06.500 [2024-11-26 06:32:50.600751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:06.500 [2024-11-26 06:32:50.600881] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:23:06.500 [2024-11-26 06:32:50.600901] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:23:06.500 [2024-11-26 06:32:50.600921] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:06.761 00:23:06.761 real 0m0.667s 00:23:06.761 user 0m0.410s 00:23:06.761 sys 0m0.153s 00:23:06.761 06:32:50 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:06.761 06:32:50 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:23:06.761 ************************************ 00:23:06.761 END TEST bdev_json_nonarray 00:23:06.761 ************************************ 00:23:07.021 06:32:50 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:23:07.021 06:32:50 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:23:07.021 06:32:50 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:23:07.021 06:32:50 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:23:07.021 06:32:50 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:23:07.021 06:32:50 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:23:07.021 06:32:50 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:23:07.021 06:32:50 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:23:07.021 06:32:50 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:23:07.021 06:32:50 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:23:07.021 06:32:50 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:23:07.021 ************************************ 00:23:07.021 END TEST blockdev_raid5f 00:23:07.021 ************************************ 00:23:07.021 00:23:07.021 real 0m50.233s 00:23:07.021 user 1m6.710s 00:23:07.021 sys 0m5.879s 00:23:07.021 06:32:50 blockdev_raid5f -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:23:07.021 06:32:50 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:07.021 06:32:50 -- spdk/autotest.sh@194 -- # uname -s 00:23:07.021 06:32:50 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:23:07.021 06:32:50 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:23:07.021 06:32:50 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:23:07.021 06:32:50 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:23:07.021 06:32:50 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:23:07.021 06:32:50 -- spdk/autotest.sh@260 -- # timing_exit lib 00:23:07.021 06:32:50 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:07.021 06:32:50 -- common/autotest_common.sh@10 -- # set +x 00:23:07.021 06:32:51 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:23:07.021 06:32:51 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:23:07.021 06:32:51 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:23:07.021 06:32:51 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:23:07.021 06:32:51 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:23:07.021 06:32:51 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:23:07.021 06:32:51 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:23:07.021 06:32:51 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:23:07.021 06:32:51 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:23:07.021 06:32:51 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:23:07.021 06:32:51 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:23:07.021 06:32:51 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:23:07.021 06:32:51 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:23:07.021 06:32:51 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:23:07.021 06:32:51 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:23:07.022 06:32:51 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:23:07.022 06:32:51 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:23:07.022 06:32:51 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:23:07.022 06:32:51 -- spdk/autotest.sh@385 -- # trap - 
SIGINT SIGTERM EXIT 00:23:07.022 06:32:51 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:23:07.022 06:32:51 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:07.022 06:32:51 -- common/autotest_common.sh@10 -- # set +x 00:23:07.022 06:32:51 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:23:07.022 06:32:51 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:23:07.022 06:32:51 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:23:07.022 06:32:51 -- common/autotest_common.sh@10 -- # set +x 00:23:08.932 INFO: APP EXITING 00:23:08.932 INFO: killing all VMs 00:23:08.932 INFO: killing vhost app 00:23:08.932 INFO: EXIT DONE 00:23:09.502 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:09.502 Waiting for block devices as requested 00:23:09.502 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:09.762 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:10.701 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:10.701 Cleaning 00:23:10.701 Removing: /var/run/dpdk/spdk0/config 00:23:10.701 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:23:10.701 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:23:10.701 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:23:10.701 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:23:10.701 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:23:10.701 Removing: /var/run/dpdk/spdk0/hugepage_info 00:23:10.701 Removing: /dev/shm/spdk_tgt_trace.pid57152 00:23:10.701 Removing: /var/run/dpdk/spdk0 00:23:10.701 Removing: /var/run/dpdk/spdk_pid56911 00:23:10.701 Removing: /var/run/dpdk/spdk_pid57152 00:23:10.701 Removing: /var/run/dpdk/spdk_pid57392 00:23:10.701 Removing: /var/run/dpdk/spdk_pid57496 00:23:10.701 Removing: /var/run/dpdk/spdk_pid57552 00:23:10.701 Removing: /var/run/dpdk/spdk_pid57686 00:23:10.701 Removing: 
/var/run/dpdk/spdk_pid57708 00:23:10.701 Removing: /var/run/dpdk/spdk_pid57914 00:23:10.701 Removing: /var/run/dpdk/spdk_pid58030 00:23:10.701 Removing: /var/run/dpdk/spdk_pid58138 00:23:10.701 Removing: /var/run/dpdk/spdk_pid58265 00:23:10.701 Removing: /var/run/dpdk/spdk_pid58373 00:23:10.701 Removing: /var/run/dpdk/spdk_pid58413 00:23:10.701 Removing: /var/run/dpdk/spdk_pid58455 00:23:10.701 Removing: /var/run/dpdk/spdk_pid58525 00:23:10.701 Removing: /var/run/dpdk/spdk_pid58648 00:23:10.701 Removing: /var/run/dpdk/spdk_pid59118 00:23:10.701 Removing: /var/run/dpdk/spdk_pid59194 00:23:10.701 Removing: /var/run/dpdk/spdk_pid59273 00:23:10.701 Removing: /var/run/dpdk/spdk_pid59295 00:23:10.701 Removing: /var/run/dpdk/spdk_pid59454 00:23:10.701 Removing: /var/run/dpdk/spdk_pid59470 00:23:10.701 Removing: /var/run/dpdk/spdk_pid59630 00:23:10.701 Removing: /var/run/dpdk/spdk_pid59646 00:23:10.701 Removing: /var/run/dpdk/spdk_pid59717 00:23:10.701 Removing: /var/run/dpdk/spdk_pid59745 00:23:10.701 Removing: /var/run/dpdk/spdk_pid59809 00:23:10.701 Removing: /var/run/dpdk/spdk_pid59827 00:23:10.701 Removing: /var/run/dpdk/spdk_pid60033 00:23:10.701 Removing: /var/run/dpdk/spdk_pid60064 00:23:10.701 Removing: /var/run/dpdk/spdk_pid60153 00:23:10.701 Removing: /var/run/dpdk/spdk_pid61558 00:23:10.701 Removing: /var/run/dpdk/spdk_pid61770 00:23:10.701 Removing: /var/run/dpdk/spdk_pid61915 00:23:10.701 Removing: /var/run/dpdk/spdk_pid62574 00:23:10.961 Removing: /var/run/dpdk/spdk_pid62781 00:23:10.961 Removing: /var/run/dpdk/spdk_pid62931 00:23:10.961 Removing: /var/run/dpdk/spdk_pid63581 00:23:10.961 Removing: /var/run/dpdk/spdk_pid63921 00:23:10.961 Removing: /var/run/dpdk/spdk_pid64068 00:23:10.961 Removing: /var/run/dpdk/spdk_pid65464 00:23:10.961 Removing: /var/run/dpdk/spdk_pid65723 00:23:10.961 Removing: /var/run/dpdk/spdk_pid65868 00:23:10.961 Removing: /var/run/dpdk/spdk_pid67273 00:23:10.961 Removing: /var/run/dpdk/spdk_pid67532 00:23:10.961 Removing: 
/var/run/dpdk/spdk_pid67683 00:23:10.961 Removing: /var/run/dpdk/spdk_pid69081 00:23:10.961 Removing: /var/run/dpdk/spdk_pid69537 00:23:10.961 Removing: /var/run/dpdk/spdk_pid69686 00:23:10.961 Removing: /var/run/dpdk/spdk_pid71197 00:23:10.962 Removing: /var/run/dpdk/spdk_pid71467 00:23:10.962 Removing: /var/run/dpdk/spdk_pid71615 00:23:10.962 Removing: /var/run/dpdk/spdk_pid73126 00:23:10.962 Removing: /var/run/dpdk/spdk_pid73396 00:23:10.962 Removing: /var/run/dpdk/spdk_pid73542 00:23:10.962 Removing: /var/run/dpdk/spdk_pid75037 00:23:10.962 Removing: /var/run/dpdk/spdk_pid75531 00:23:10.962 Removing: /var/run/dpdk/spdk_pid75677 00:23:10.962 Removing: /var/run/dpdk/spdk_pid75826 00:23:10.962 Removing: /var/run/dpdk/spdk_pid76264 00:23:10.962 Removing: /var/run/dpdk/spdk_pid77005 00:23:10.962 Removing: /var/run/dpdk/spdk_pid77382 00:23:10.962 Removing: /var/run/dpdk/spdk_pid78082 00:23:10.962 Removing: /var/run/dpdk/spdk_pid78529 00:23:10.962 Removing: /var/run/dpdk/spdk_pid79295 00:23:10.962 Removing: /var/run/dpdk/spdk_pid79718 00:23:10.962 Removing: /var/run/dpdk/spdk_pid81697 00:23:10.962 Removing: /var/run/dpdk/spdk_pid82143 00:23:10.962 Removing: /var/run/dpdk/spdk_pid82589 00:23:10.962 Removing: /var/run/dpdk/spdk_pid84687 00:23:10.962 Removing: /var/run/dpdk/spdk_pid85173 00:23:10.962 Removing: /var/run/dpdk/spdk_pid85676 00:23:10.962 Removing: /var/run/dpdk/spdk_pid86743 00:23:10.962 Removing: /var/run/dpdk/spdk_pid87067 00:23:10.962 Removing: /var/run/dpdk/spdk_pid88010 00:23:10.962 Removing: /var/run/dpdk/spdk_pid88337 00:23:10.962 Removing: /var/run/dpdk/spdk_pid89282 00:23:10.962 Removing: /var/run/dpdk/spdk_pid89605 00:23:10.962 Removing: /var/run/dpdk/spdk_pid90282 00:23:10.962 Removing: /var/run/dpdk/spdk_pid90570 00:23:10.962 Removing: /var/run/dpdk/spdk_pid90638 00:23:10.962 Removing: /var/run/dpdk/spdk_pid90686 00:23:10.962 Removing: /var/run/dpdk/spdk_pid90929 00:23:10.962 Removing: /var/run/dpdk/spdk_pid91108 00:23:10.962 Removing: 
/var/run/dpdk/spdk_pid91208 00:23:10.962 Removing: /var/run/dpdk/spdk_pid91305 00:23:10.962 Removing: /var/run/dpdk/spdk_pid91364 00:23:10.962 Removing: /var/run/dpdk/spdk_pid91389 00:23:10.962 Clean 00:23:11.221 06:32:55 -- common/autotest_common.sh@1453 -- # return 0 00:23:11.221 06:32:55 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:23:11.221 06:32:55 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:11.221 06:32:55 -- common/autotest_common.sh@10 -- # set +x 00:23:11.221 06:32:55 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:23:11.221 06:32:55 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:11.221 06:32:55 -- common/autotest_common.sh@10 -- # set +x 00:23:11.221 06:32:55 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:23:11.221 06:32:55 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:23:11.221 06:32:55 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:23:11.221 06:32:55 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:23:11.221 06:32:55 -- spdk/autotest.sh@398 -- # hostname 00:23:11.221 06:32:55 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:23:11.480 geninfo: WARNING: invalid characters removed from testname! 
00:23:38.123 06:33:17 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:38.123 06:33:20 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:38.693 06:33:22 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:41.232 06:33:24 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:43.142 06:33:27 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:45.685 06:33:29 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:47.593 06:33:31 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:23:47.593 06:33:31 -- spdk/autorun.sh@1 -- $ timing_finish 00:23:47.593 06:33:31 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:23:47.593 06:33:31 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:23:47.593 06:33:31 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:23:47.593 06:33:31 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:23:47.593 + [[ -n 5424 ]] 00:23:47.593 + sudo kill 5424 00:23:47.602 [Pipeline] } 00:23:47.618 [Pipeline] // timeout 00:23:47.623 [Pipeline] } 00:23:47.638 [Pipeline] // stage 00:23:47.643 [Pipeline] } 00:23:47.659 [Pipeline] // catchError 00:23:47.668 [Pipeline] stage 00:23:47.670 [Pipeline] { (Stop VM) 00:23:47.683 [Pipeline] sh 00:23:47.969 + vagrant halt 00:23:50.510 ==> default: Halting domain... 00:23:58.674 [Pipeline] sh 00:23:58.960 + vagrant destroy -f 00:24:01.531 ==> default: Removing domain... 
00:24:01.545 [Pipeline] sh 00:24:01.829 + mv output /var/jenkins/workspace/raid-vg-autotest_3/output 00:24:01.839 [Pipeline] } 00:24:01.856 [Pipeline] // stage 00:24:01.862 [Pipeline] } 00:24:01.878 [Pipeline] // dir 00:24:01.884 [Pipeline] } 00:24:01.900 [Pipeline] // wrap 00:24:01.907 [Pipeline] } 00:24:01.922 [Pipeline] // catchError 00:24:01.932 [Pipeline] stage 00:24:01.935 [Pipeline] { (Epilogue) 00:24:01.948 [Pipeline] sh 00:24:02.234 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:24:06.445 [Pipeline] catchError 00:24:06.446 [Pipeline] { 00:24:06.459 [Pipeline] sh 00:24:06.741 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:24:06.741 Artifacts sizes are good 00:24:06.751 [Pipeline] } 00:24:06.768 [Pipeline] // catchError 00:24:06.780 [Pipeline] archiveArtifacts 00:24:06.787 Archiving artifacts 00:24:06.893 [Pipeline] cleanWs 00:24:06.915 [WS-CLEANUP] Deleting project workspace... 00:24:06.915 [WS-CLEANUP] Deferred wipeout is used... 00:24:06.922 [WS-CLEANUP] done 00:24:06.924 [Pipeline] } 00:24:06.940 [Pipeline] // stage 00:24:06.946 [Pipeline] } 00:24:06.960 [Pipeline] // node 00:24:06.967 [Pipeline] End of Pipeline 00:24:07.007 Finished: SUCCESS